Built-in connection pooler

Started by Konstantin Knizhnik, almost 7 years ago (74 messages)
#1 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>
1 attachment

Hi hacker,

I have been working for some time on a built-in connection pooler for Postgres:
/messages/by-id/a866346d-5582-e8e8-2492-fd32732b0783@postgrespro.ru

Unlike existing external poolers, this solution supports session semantics
for pooled connections: you can use temporary tables, prepared
statements, GUCs, ...
But to make this possible I need to store/restore the session context.
It is not so difficult, but it requires a significant number of changes in
the Postgres code.
It will be committed in PgProEE-12, the Postgres Professional version of
Postgres, but I realize that there is little chance of getting these
changes into the mainstream version of Postgres.

Dimitri Fontaine proposed to develop a much simpler version of the pooler
which could be accepted by the community:

The main idea I want to pursue is the following:

- only solve the “idle connection” problem, nothing else, making idle connection basically free
- implement a layer in between a connection and a session, managing a “client backend” pool
- use the ability to give a socket to another process, as you did, so that the pool is not a proxy
- allow re-using of a backend for a new session only when it is safe to do so

Unfortunately, we have not found a way to support SSL connections with
socket redirection, so I have implemented a solution with the traditional
proxy approach.
If a client changes the session context (creates temporary tables, sets GUC
values, prepares statements, ...), then its backend becomes "tainted"
and no longer participates in pooling. It is now dedicated to this
client, but it still receives data through the proxy.
Once this client disconnects, the tainted backend is terminated.
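
For example, any of the following statements (an illustrative list, not an
exhaustive one) would taint the backend serving the client:

SET work_mem = '64MB';                        -- changes a GUC
CREATE TEMP TABLE scratch(id int);            -- creates a temporary table
PREPARE q AS SELECT count(*) FROM pg_class;   -- creates a prepared statement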

The built-in proxy accepts connections on a special port (6543 by default).
If you connect to the standard port, then normal Postgres backends are
launched and there is no difference from vanilla Postgres.
If you connect to the proxy port, then the connection is redirected to one
of the proxy workers, which then schedules all sessions assigned
to it.
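
For example, with the default ports the same server can be reached both ways
(assuming a database named postgres):

psql -p 5432 postgres     # dedicated backend, exactly as in vanilla Postgres
psql -p 6543 postgres     # connection is handled by the built-in proxy
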
There is currently no migration of sessions between proxies. There are
three policies for assigning a session to a proxy:
random, round-robin and load-balancing.
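
The policy is selected with the session_schedule GUC. A minimal sketch of a
pooling configuration in postgresql.conf (the values are only an example)
could look like this:

connection_proxies = 2
session_pool_size = 4
max_sessions = 1000
proxy_port = 6543
session_schedule = 'load-balancing'   # or 'round-robin' (default), 'random'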

The main differences from pgbouncer & co. are:

1. It is embedded and requires no extra steps for installation and
configuration.
2. It is not single-threaded (no bottleneck).
3. It supports all clients (if a client needs session semantics, it
will implicitly be given a dedicated backend).

Some performance results (pgbench -S -n):

#Connections    Proxy    Proxy/SSL    Direct    Direct/SSL
           1    13752        12396     17443         15762
          10    53415        59615     68334         85885
        1000    60152        20445     60003         24047

Proxy configuration is the following:

session_pool_size = 4
connection_proxies = 2

postgres=# select * from pg_pooler_state();
 pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | tx_bytes | rx_bytes | n_transactions
------+-----------+---------------+---------+------------+----------------------+----------+----------+----------------
 1310 |         1 |             0 |       1 |          4 |                    0 | 10324739 |  9834981 |         156388
 1311 |         0 |             0 |       1 |          4 |                    0 | 10430566 |  9936634 |         158007
(2 rows)

This implementation contains far fewer changes to the Postgres core (it is
more like running pgbouncer as a Postgres worker).
The main things I have added are:
1. A mechanism for sending a socket to another process (needed to redirect a
connection to a proxy)
2. Support for edge-triggered polling mode for epoll (needed to multiplex
reads and writes)
3. A library, libpqconn, for establishing a libpq connection from the core
(a rough usage sketch follows)
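
The sketch below is not code from the patch; the helper name and connection
parameters are my own illustration of how a proxy worker might use the
LibpqConnectdbParams hook installed by libpqconn's _PG_init():

#include "postgres.h"
#include "postmaster/postmaster.h"   /* PostPortNumber and (assumed) LibpqConnectdbParams */

/* Hypothetical helper, for illustration only: open a local libpq connection
 * for launching a worker backend through the LibpqConnectdbParams hook. */
static void *
proxy_open_local_connection(const char *dbname, const char *user)
{
    char const *keywords[] = {"host", "port", "dbname", "user", NULL};
    char const *values[5];
    char        port_str[16];

    snprintf(port_str, sizeof(port_str), "%d", PostPortNumber);
    values[0] = "localhost";
    values[1] = port_str;
    values[2] = dbname;
    values[3] = user;
    values[4] = NULL;

    if (LibpqConnectdbParams == NULL)        /* libpqconn library not loaded */
        return NULL;
    return LibpqConnectdbParams(keywords, values);  /* PGconn*, or NULL on failure */
}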

The proxy version of the built-in connection pool is in the conn_proxy branch
of the following git repository:
https://github.com/postgrespro/postgresql.builtin_pool.git

I also attach a patch against master to this mail.
I will be pleased to receive your comments.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-1.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e94b305..ee12562 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -704,6 +704,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Once launched, non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends can serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connecting to the main <varname>port</varname> will be assigned dedicated backends,
+          while clients connecting to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies. The postmaster spawns a separate worker process for each proxy
+          and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          <varname>session_pool_size</varname>*<varname>connection_proxies</varname>*#databases*#roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..07f4202
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many internal Postgres data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms operating on these data structures.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, Odyssey, ... But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting from version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler can reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend can work with only a single database, each proxy process maintains a
+    hash table of connection pools keyed by the <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    the session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to spawn.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000, so the maximal number of pooled client connections is limited by <varname>connection_proxies</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432, so that by default all connections to the database are pooled.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Since pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable: setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function; then it will be possible to drop the database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also, it doesn't introduce any limitations for clients: existing clients can work through the proxy and will not notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session-level is used.
+    And if the application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore the session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. So developers of client applications still have a choice:
+    either avoid using session-specific operations, or do not use pooling. For example, using prepared statements can improve the speed of simple queries
+    by up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    The pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. This greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 5dfdf54..8747b7f 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..32d0c77 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 6036b73..99d2da9 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -467,6 +468,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 588c1ec..f6e1daf 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -553,6 +553,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 0c9593d..4a6d01d 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..83c97c5
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,164 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index af35cfb..5890445 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..8edd93d
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index eedc617..1fd5878 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -413,7 +430,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -435,6 +451,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -487,6 +504,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -569,6 +588,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -581,6 +642,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1013,6 +1077,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1036,32 +1105,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1130,29 +1203,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1162,6 +1238,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1383,6 +1473,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1620,6 +1712,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate workload of proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement much more sophisticated evaluation function,
+ * but right now we only take into account the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose connection pool for this session.
+ * Right now sessions can not be moved between pools (in principle it is not so difficult to implement it),
+ * so to support order balancing we should do dome smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1710,14 +1853,26 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+							ConnFree(port);
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+
+							/*
+							 * We no longer need the open socket or port structure
+							 * in this process
+							 */
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
 					}
 				}
 			}
@@ -1789,6 +1944,8 @@ ServerLoop(void)
 		if (StartWorkerNeeded || HaveCrashedWorker)
 			maybe_start_bgworkers();
 
+		StartConnectionProxies();
+
 #ifdef HAVE_PTHREAD_IS_THREADED_NP
 
 		/*
@@ -1905,8 +2062,6 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 	if (pq_getbytes((char *) &len, 4) == EOF)
@@ -1955,6 +2110,15 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, SSLdone);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool SSLdone)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2029,7 +2193,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2694,6 +2858,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2771,6 +2937,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4000,6 +4169,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4009,8 +4179,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4114,6 +4284,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4810,6 +4982,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4950,6 +5123,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(false, 0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5018,7 +5204,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5507,6 +5692,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for autovac workers, but we'd
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  */
@@ -6084,6 +6337,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6312,6 +6569,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySock, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..4265ebd
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,907 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted;
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;
+
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup package is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;
+	Channel* pending_clients;
+	Proxy*   proxy;
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if the backend is not tainted it is possible to schedule some other client to this backend
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend has already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+	MyProcPort = chan->client_port;
+	pq_init();
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock);
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has idle backend */
+		Assert(!idle_backend->backend_is_tainted);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Wait until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into a single-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+	chan->backend_is_ready = false;
+	chan->peer = NULL;
+	if (chan->client_port && peer)
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf */
+		{
+			peer->is_interrupted = true;
+			channel_write(peer, false);
+		}
+		else
+		{
+			backend_reschedule(peer);
+		}
+	}
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: write(chan->backend_socket, buf, size);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * The data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between readers and writers.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
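+		/*
+		 * The client disconnected without sending a Terminate message, so send it to the
+		 * backend on its behalf: message type 'X' followed by an int32 length of 4
+		 * (the length field counts itself).
+		 */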
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		/* detach backend from client */
+		chan->is_interrupted = false;
+		backend_reschedule(chan);
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: read(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
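+		/*
+		 * Scan the buffer for complete protocol messages. The startup packet has no
+		 * message type byte: its first int32 is the total packet length (including the
+		 * length field itself). All other messages consist of a one-byte type followed
+		 * by an int32 length that covers the length field and payload but not the type byte.
+		 */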
+		while (chan->rx_pos - msg_start >= 5) /* have message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]); 
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* should be last message */
+					chan->backend_is_ready = true;
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* message from client */
+						 && chan->buf[msg_start] == 'X')	/* terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* skip startup packet */
+						if (backend != NULL) /* if backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent its handshake response */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break;
+		}
+		if (msg_start != 0)
+		{
+			/* has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	pg_set_noblock(sock);
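+	/*
+	 * Sockets are registered in edge-triggered mode (WL_SOCKET_EDGE), so events are
+	 * reported only on state changes; channel_read/channel_write therefore keep
+	 * transferring data until the socket reports EAGAIN/EWOULDBLOCK.
+	 */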
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for the pool associated with a particular dbname/role combination.
+ * The backend is launched by opening a regular libpq connection to the local postmaster
+ * (through the dynamically loaded "libpqconn" helper); its handshake response is saved
+ * so that it can later be replayed to clients attached to this backend.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"host","port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {"localhost",postmaster_port,pool->key.database, pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	pg_set_noblock(conn->sock);
+
+	/* Save the raw handshake response captured in libpq's input buffer; it is replayed to clients that are later attached to this backend */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract the backend pid from the BackendKeyData ('K') message in the handshake response */
+	msg = chan->handshake_response;
+	while (*msg != 'K')
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		close(chan->backend_socket); /* close the socket before freeing the channel to avoid use-after-free */
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan)) {
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2); /* we need events both for clients and backends */
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+					elog(ERROR, "Failed to receive session socket: %m");
+				proxy_add_client(proxy, port);
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/* Delayed deallocation of disconnected channels */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+															 ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxy state
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 0c86a58..bcbde42 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -153,6 +154,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -261,6 +263,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index b080453..dd1cb73 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* single-linked list of free events linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +763,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the given position (as returned by AddWaitEventToSet) from the set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,7 +797,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -804,9 +834,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +874,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,19 +884,37 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -895,9 +945,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -910,7 +976,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -927,8 +993,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1330,7 +1396,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 5ab7d3c..53ade20 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4224,6 +4224,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index 525decb..e53b2d1 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index c693977..49f31a7 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 6fe1939..d88a4dd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -451,6 +451,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 /*
  * Options for enum values stored in other modules
  */
@@ -1226,6 +1234,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2054,6 +2072,41 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections and max_wal_senders */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2101,6 +2154,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4403,6 +4466,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -7815,6 +7888,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index 22da98c..d40a9d2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index acb0154..1e571f1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10095,4 +10095,10 @@
   proargnames => '{rootrelid,relid,parentrelid,isleaf,level}',
   prosrc => 'pg_partition_tree' },
 
+{ oid => '3424', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index e817524..66033a8 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,8 +54,8 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern int StreamServerPort(int family, char *hostName,
-				 unsigned short portNumber, char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen);
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index d6b32c0..3fe7de2 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 570a905..c03e78b 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f9d351f..a4c09e5 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index a40d66e..dd78a71 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 039a82e..89f1c1b 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index cb613c8..d4b728e 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 6f9fdb6..455e3e9 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 3aaa8a9..c172e10 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 7abbd01..7566f51 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -15,6 +15,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#2Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#1)
Re: Built-in connection pooler

On Thu, Jan 24, 2019 at 08:14:41PM +0300, Konstantin Knizhnik wrote:

The main differences with pgbouncer&K are that:

1. It is embedded and requires no extra steps for installation and
configurations.
2. It is not single threaded (no bottleneck)
3. It supports all clients (if client needs session semantic, then it will be
implicitly given dedicated backend)

Some performance results (pgbench -S -n):

 #Connections | Proxy | Proxy/SSL | Direct | Direct/SSL
--------------+-------+-----------+--------+-----------
            1 | 13752 |     12396 |  17443 |      15762
           10 | 53415 |     59615 |  68334 |      85885
         1000 | 60152 |     20445 |  60003 |      24047

It is nice that it is a smaller patch.  Please remind me of the
performance advantages of this patch.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#3Dimitri Fontaine
dimitri@citusdata.com
In reply to: Bruce Momjian (#2)
Re: Built-in connection pooler

Hi,

Bruce Momjian <bruce@momjian.us> writes:

It is nice it is a smaller patch. Please remind me of the performance
advantages of this patch.

The patch as it stands is mostly helpful in those situations:

- application server(s) start e.g. 2000 connections at start-up and
then use them depending on user traffic

It's then easy to see that if we only fork as many backends as
we need, while having accepted the 2000 connections without doing
anything about them, we would be in a much better position than when
we fork 2000 unused backends.

- application is partially compatible with pgbouncer transaction
pooling mode

In that case, you would need to run with pgbouncer in session
mode. This happens when the application code is using session-level
SQL commands/objects, such as prepared statements, temporary tables,
or session-level GUCs settings.

With the attached patch, if the application sessions profiles are
mixed, then you dynamically get the benefits of transaction pooling
mode for those sessions which are not “tainting” the backend, and
session pooling mode for the others.

It means that it's then possible to find the most often used session
and fix that one for immediate benefits, leaving the rest of the
code alone. If it turns out that 80% of your application sessions
are the same code-path and you can make this one “transaction
pooling” compatible, then you most probably are fixing (up to) 80%
of your connection-related problems in production.

- applications that use a very high number of concurrent sessions

In that case, you can either set your connection pooling the same as
max_connections and see no benefits (and hopefully no regressions
either), or set a lower number of backends serving a very high
number of connections, and have sessions waiting their turn at the
“proxy” stage (see the example settings below).

This is a kind of naive Admission Control implementation where it's
better to have active clients in the system wait in line consuming
as few resources as possible. Here, in the proxy. It could be done
with pgbouncer already, this patch gives a stop-gap in PostgreSQL
itself for those use-cases.

It would be mostly useful to do that when you have queries that
benefit from parallel workers. In that case, controlling the number
of active backends forked at any time to serve user queries allows
better use of the available parallel workers.
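
For illustration only, such a setup could look like the following
postgresql.conf fragment, using the GUCs introduced by the patch (the
values here are made up for the example):

    proxy_port = 6543            # clients connect here instead of the regular port
    connection_proxies = 2       # number of proxy worker processes
    session_pool_size = 16       # backends per proxy for each database/role pair
    max_sessions = 10000         # client sessions one proxy can handle
    session_schedule = 'load-balancing'

With such settings, up to connection_proxies * session_pool_size
non-tainted backends serve each database/role pair, however many
client connections are accepted.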

In other cases, it's important to measure and accept the possible
performance cost of running a proxy server between the client connection
and the PostgreSQL backend process. I believe the numbers shown in the
previous email by Konstantin are about showing the kind of impact you
can see when using the patch in a use-case where it's not meant to be
helping much, if at all.

Regards,
--
dim

#4Michael Paquier
michael@paquier.xyz
In reply to: Dimitri Fontaine (#3)
Re: Built-in connection pooler

On Mon, Jan 28, 2019 at 10:33:06PM +0100, Dimitri Fontaine wrote:

In other cases, it's important to measure and accept the possible
performance cost of running a proxy server between the client connection
and the PostgreSQL backend process. I believe the numbers shown in the
previous email by Konstantin are about showing the kind of impact you
can see when using the patch in a use-case where it's not meant to be
helping much, if at all.

Have you looked at the possibility of having the proxy worker be
spawned as a background worker? I think that we should avoid spawning
any new processes on the backend side from the ground up, as we have a
lot more infrastructure since 9.3. The patch should actually be bigger; the
code is very raw and lacks comments in a lot of areas where the logic
is not so obvious, except perhaps to the patch author.
--
Michael

#5Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#2)
Re: Built-in connection pooler

On 29.01.2019 0:10, Bruce Momjian wrote:

On Thu, Jan 24, 2019 at 08:14:41PM +0300, Konstantin Knizhnik wrote:

The main differences with pgbouncer&K are that:

1. It is embedded and requires no extra steps for installation and
configurations.
2. It is not single threaded (no bottleneck)
3. It supports all clients (if client needs session semantic, then it will be
implicitly given dedicated backend)

Some performance results (pgbench -S -n):

 #Connections | Proxy | Proxy/SSL | Direct | Direct/SSL
--------------+-------+-----------+--------+-----------
            1 | 13752 |     12396 |  17443 |      15762
           10 | 53415 |     59615 |  68334 |      85885
         1000 | 60152 |     20445 |  60003 |      24047

It is nice it is a smaller patch. Please remind me of the performance
advantages of this patch.

The primary purpose of a pooler is efficient support of a large number
of connections and minimizing system resource usage.
But since Postgres does not scale well on SMP systems with a large
number of CPU cores (due to many reasons discussed in -hackers),
reducing the number of concurrently working backends can also
significantly increase performance.

I have not done such testing yet, but I am planning to do it, as well as
a comparison with pgbouncer and Odyssey.
But please notice that this proxy approach is by design slower than my
previous implementation used in PgPRO-EE (based on socket redirection).
On some workloads, connections through the proxy cause up to a two-times
decrease in performance compared with dedicated backends.
There is no such problem with the old connection pooler implementation,
which was never worse than vanilla. But it doesn't support SSL connections
and requires many more changes in the Postgres core.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#6Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Michael Paquier (#4)
Re: Built-in connection pooler

On 29.01.2019 8:14, Michael Paquier wrote:

On Mon, Jan 28, 2019 at 10:33:06PM +0100, Dimitri Fontaine wrote:

In other cases, it's important to measure and accept the possible
performance cost of running a proxy server between the client connection
and the PostgreSQL backend process. I believe the numbers shown in the
previous email by Konstantin are about showing the kind of impact you
can see when using the patch in a use-case where it's not meant to be
helping much, if at all.

Have you looked at the possibility of having the proxy worker be
spawned as a background worker?

Yes, it was my first attempt.
The main reasons why I have implemented it in the old way are:
1. I need to know the PID of the spawned worker. Strange - it is possible
to get the PID of a dynamic bgworker, but not of a static one.
Certainly it is possible to find a way of passing this PID to the
postmaster, but it complicates the start of the worker.
2. I need to pass a socket to the spawned proxy. Once again: it can also
be implemented with a bgworker, but it requires more coding (especially
taking into account support of Win32 and fork/exec mode).
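
Just to illustrate what passing a socket means here: on Unix systems it
is the classic sendmsg()/SCM_RIGHTS trick, roughly as in the sketch below
(hypothetical helper names; the real pg_send_sock()/pg_recv_sock() in the
patch also have to deal with Win32, retries and error reporting):

#include <string.h>
#include <sys/socket.h>

/* Send file descriptor "fd" over the Unix-domain socket "chan" */
static int
send_fd(int chan, int fd)
{
	struct msghdr msg;
	struct iovec iov;
	char dummy = 'S';                       /* at least one byte of payload is required */
	char cmsgbuf[CMSG_SPACE(sizeof(int))];
	struct cmsghdr *cmsg;

	memset(&msg, 0, sizeof(msg));
	iov.iov_base = &dummy;
	iov.iov_len = 1;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsgbuf;
	msg.msg_controllen = sizeof(cmsgbuf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;           /* ancillary data carries file descriptors */
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

/* Receive a file descriptor from "chan"; returns -1 on failure */
static int
recv_fd(int chan)
{
	struct msghdr msg;
	struct iovec iov;
	char dummy;
	char cmsgbuf[CMSG_SPACE(sizeof(int))];
	struct cmsghdr *cmsg;
	int fd = -1;

	memset(&msg, 0, sizeof(msg));
	iov.iov_base = &dummy;
	iov.iov_len = 1;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsgbuf;
	msg.msg_controllen = sizeof(cmsgbuf);

	if (recvmsg(chan, &msg, 0) <= 0)
		return -1;
	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
		memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
	return fd;
}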

I think that we should avoid spawning
any new processes on the backend from the ground as we have a lot more
infrastructures since 9.3. The patch should actually be bigger, the
code is very raw and lacks comments in a lot of areas where the logic
is not so obvious, except perhaps to the patch author.

The main reason for publishing this patch was to receive feedback and
find places which should be rewritten.
I will add more comments, but I will be pleased if you point me to the
places which are obscure now and require better explanation.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#7Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#2)
3 attachment(s)
Re: Built-in connection pooler

Attached please find results of benchmarking of different connection
poolers.

Hardware configuration:
   Intel(R) Xeon(R) CPU           X5675  @ 3.07GHz
   24 cores (12 physical)
   50 GB RAM

Tests:
     pgbench read-write (scale 1): performance is mostly limited by
disk throughput
     pgbench select-only (scale 1): performance is mostly limited by
efficient utilization of CPU by all workers
     pgbench with YCSB-like workload with Zipf distribution:
performance is mostly limited by lock contention

Participants:
    1. pgbouncer (16 and 32 pool size, transaction level pooling)
    2. Postgres Pro-EE connection pooler: redirection of client
connections to pool workers and maintaining of session contexts.
        16 and 32 connection pool size (number of worker backends).
    3. Built-in proxy connection pooler: implementation proposed in
this thread.
        16/1 and 16/2 specify the number of worker backends per proxy and
the number of proxies; the total number of backends is the product of
these values.
    4. Yandex Odyssey (32/2 and 64/4 configurations specify the number of
backends and Odyssey threads).
    5. Vanilla Postgres (marked at diagram as "12devel-master/2fadf24
POOL=none")

In all cases except 2), the master branch of Postgres is used.
The client (pgbench), the pooler and Postgres are launched on the same host.
Communication goes through the loopback interface (host=localhost).
We have tried to find the optimal parameters for each pooler.
The three graphs attached to this mail illustrate the three test cases.

A few comments about these results:
- The PgPro EE pooler shows the best results in all cases except the tpc-b-like
workload. In that case the proxy approach is more efficient because of more
flexible job scheduling between workers
  (in EE, sessions are scattered between worker backends at connect time,
while the proxy chooses the least loaded backend for each transaction; see the
sketch after this list).
- pgbouncer is not able to scale well because of its single-threaded
architecture. Certainly it is possible to spawn several instances of
pgbouncer and scatter
  the workload between them, but we have not done that.
- Vanilla Postgres demonstrates significant degradation of performance with a
large number of active connections on all workloads except read-only.
- Despite the fact that Odyssey is a new player (or maybe because of it),
the Yandex pooler doesn't demonstrate good results. It is the only
pooler which also causes performance to degrade as the number of
connections grows. Maybe this is caused by memory leaks, because its memory
footprint also actively increases during the test.
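
To make the scheduling difference concrete: per-transaction scheduling just
means that the proxy picks the backend with the smallest number of currently
attached clients each time a client starts a transaction. A toy sketch (all
names here are made up for illustration; this is not the patch's code):

/* Hypothetical sketch of per-transaction scheduling: instead of binding a
 * session to a worker at connect time, pick the least loaded backend in the
 * pool whenever a client begins a new transaction. */
typedef struct PoolBackend
{
    int     fd;         /* socket to the worker backend */
    int     n_clients;  /* clients whose transactions are currently bound here */
} PoolBackend;

static PoolBackend *
choose_backend(PoolBackend *pool, int pool_size)
{
    PoolBackend *best = &pool[0];

    for (int i = 1; i < pool_size; i++)
    {
        if (pool[i].n_clients < best->n_clients)
            best = &pool[i];
    }
    best->n_clients++;  /* decremented again when the transaction ends */
    return best;
}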

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

tpc-b.png (image/png): benchmark graph; binary image data not reproduced here.


�����_�������_��BA��=�����Ga��5L�:U[�h��5���h�Gv�����Wk`UU?��#F�h��p-]�t�2���V����G��Q�O~�^
��\{_{���K�y})/*'�Hq�q�����J|��������[�n��� 5)����^�JJJ��� 33���n��ANNYYYaoo��dboo������J��]Y��2F���U�0Q��O��Y���-Su����<N�� L��]���"Lm��7�1�0�;�������

����Vqw��E�����������y�v�/&&�Z�����M�4����3}�t-ZDRR]�t!&&�U�V1o�<��k�G}���9r$C����#����={�h�����2r�H��G�~���kqqq�^�Zo��A����|��#���_LZH?J�@ �+����7�0���
�|8�9��v���
�6`�hAi~)���KK������i��`y��L�6KWy��������u�M����lm����Dn��Inn.Fj#���hS�����|��wxzz���[/��o3~c�����k��:���S���p��,�sGI����`�l�5
�:	�1c��~�z�t�r�s�v�b���l��A�����y������/7n$22ggg��]���c�s������?���/��u+>>>9rDG�2���W_i���y�[Z���xH����}����@ Z1����g?O���H=�J���N�Q�UDi^)���������3���8ut���S���iTU���(5�V��f�8���_�����7��� �C��������P2��������d�����L�������?��0�yIy�X����������w����s����������S^^��C�6l�^�z���t����B�����]��|��{�			!$$��q�IK�\��N��������G����pS���+w+�b���V�W���K��L2c3��=���9L��,LmMq���s�3��%q���Ug�)����S�����g2gV����[8�p���O�?a����[[�X��BF�����87/���[���1�
7�tT*�=�_���D�)�����[��U[Y�a��������LK�|7�G���^ ��E�:6�H�\�^��#z���,?)))@���P+���������sUUT���KN|��3I�J%��h�c�Q��8:��C{��W�h(�(��O����v�����7�|�B�B�{�9�����8>��cV�X�B)u)����{O]�K&�1���tz���u�fT_�)���!C���������������^���T����3D������G�X~D/`����'  �Y��TUC����T�ZM^bY��m�oDMUEUUP���#�Jc%J��i������Xv�}'�<��h�h���9���<��S�Y��~�z.\����=��M���%#G�d���������9>a>�x76u�\�j��qn�5��5��>�o��x��wQ��,@S#\��#\��#V���fQh}b�aM�����W(�z�b�m����:9_u��G�{���Bn��$�b+������..�t�V]��������=��4�b���������' �^|�E>\�~���a8WcA��1 ;���A����A���J/�_���R��?�k�����2�g��J���|{���tL�M��1t��.������g����:��;�����Q����!MB��S9�j���@��������K����J�����8�A��L�'pr�I�.ea�n�c�#�q��H~J>%9%��Tw�0�3���~BQ�P����h�#���Z\�������)����������).`�.`����PQ(�CM?]��'[fm���$J������J.o�����bdb���C�3���u��9�j5Z���Nbb"�{���)++"PFn����pk�����b����C�R��@�\���X���������u�f����[�
XkP��<��1?{>o�����������
��-O���~��tg���,���-3�H=�3��{��+W��SOi��^�s��o�>�a�����C�ZA+�Jkq�����l�;���_n�P�n
�X~�X~������2Si;��T�0=&����DM��H���i�V�n���������S��};�G�f���XYbY*��a��������E��h��&���0���I9q�����p��p������\�����`w���	���RM��,'�x(�����R+��{�����k���SO1v�X��������5{����&C�P�}�dN�ZeP �����{��)�����������O�a�kG��(���h��
s����?�e����\���s��D(�b���,\���+W�����������*(����0�V�@�@ri�%v�����2�V\F9�8wt��4x�f���j�%��}�d=�0�(�����,�xz���77nl�P��k����F�������7w�����;�VM\\��	,�>�0n�8������9nu�B�BN�������"�>s�4w��!
����I��j67�,?�,?�,?-�l��������������������'EvE�~o6]������@@��5��5t=���r�^m�8�X~���#j�LPP����F�&  �E:�
����.]�`oo���k111a���b�Ff�4PZ�X���	]��'z��f��k����D/h~~~���6w�///QjGf����X�DFF��}-,,������X+�~�iQPF�4=�O���_�������yXT���������*nh
����ZZ.i�ZZZf��/s�4���-�e�i��fY.���-����(.���������*gf`�����9s������������C!""=

���{�i���6�������:u*�(oC���#l��q��_�?x���CDDz�����������N�inn�!C��yT��G�TcMWe�0S��1,S��1,_jj*�;f�a��F�1��+=�8�v����u��?S��1,S��1,_NN���=���H5�p%k�?���[��?S��1,S��1,���'�u�}�����j�)��:O��S:
��|L���|L��������� ����(HKCjT���BDDd2X�A)�Jtz�	���KC����d�4R����y�dDo��r='r���)`����)`���?�F�R���Z��GPb�\�2,S��1,S��1�,��)���2D�)`����)`����)`��L F�TR�����=/����W����^�3((H/�c���������Ch�������w�3$����F�1c�~���BDD���$��y�d���!=""�F���2�p�������	����c
X>���c
X>���c
X�X)SJW�i�$��A���)`����)`���?�F��R��:N����~Bqn���b
X>���c
X>���c
X��6R���d�����q���������S��1,S��1,S���#�dtqO@"""S���N��C�}�
2��3�P���%�F�S���*:N��S_~)�}���)`����)`���?�F�TS��:O��������T�{0,S��1,S��1�,�����+��n
��D�X�pI����c
X>���c
X>����)`#e�)��:O���
�-�)`����)`����)`��@2Z���C��RO�"�d�,������,C����Qah�L5��7
�W����ML�i��k���)`����)`���?�F�TS�^��X(��k����rrB��E��EF��{1,S��1,S��1�,������rh��	2��/p���c
X>���c
X>����)`#�������g�\�~�L�������|L���|L����� 5=f�F~j*�~����BDD���$���i��P�T�b����b^SFDDT',�����o��O�?~{��zYS��1,S��1,S����H�j
�N�{�}���7H9u���b
X>���c
X>���c
X�X)��kf����7�������N�b
X>���c
X>���c
X�X)��o���OCh48�n]��ss�z�$  ��>�|}}���f�a4j^^^LK�������Q����0`f��U��?��z��
wwwt��	�/�F�����_�{��pssCpp0���t�B��>@pp0�����{w|�������N�Tb��U80>
32=""��(���2���[4h��?_�u������`��m�={6>��#����>������>�1c�`����2e
����+Vh�����o����~��o��Q�0e�l��E/�J��GP:<�(��o��5<��<yR������X����a��U�*���>����{O������|!�m���&M��3k�,��iSQ^^.JJJ�����3g�N�q���{�����|���������^�l��]�r���\����H8|�?���_~j���GEUEDD���C�Q�������F�+bcc
=�F-%%EDDDzF�_�~�_�~���(��l�G�A���k|������#t�����|DDD 991115�ILLDll,N�>������:u
YYY��bw�)�;���������g������_��|L���|L����ehggw����^����R4m�T������q��%������yC`
�v:<�(||���w�Z���c
X>���c
X>����A�\PPpvv�iwtt�R�D~~�-�T>������]Ca
������������G`S1;�X>����\��1,���YZXX
u���h`eeu�>�E]�#?���:�x5�����,4o�NNNP�������(++���6o��6m��n�����5J��
l�[{�I�����1f�&����v����5�W��Z'T���=����C6�u���d���F���?��m������3���kL�8Q�~���h�?����m[���/<x�����G��gO=zEEE���?���u����O9r$�\�r�S+���:>��3l����`PPlll(--�����in��������t�����j�0����lg;������###��|�������C�A*�1�:�U
800P<���:mk��J�R$%%���2���"^{�5�>����������";;[XXX�O>�D���Y������h4�S��%n���eKQZTT��L���|L���|L���S��1u�Tl��Y�����AXXF�///���a��)X�z�6Y����U�Va���������#�����0$''��l����M�B�0��1|����;���e����|L���|L����e�p�B(
(
����v��>^�f
`���:t(�u����@4i�
�+W��YN����m[�i���������%K�}>��#8;;�E�h��
��O����z_����o�|�1�^������/S��1,S��1,S��g��&%%i����y��:W�>}������7z��333��B�8q�.]���BBB���h48z�(��;�j����:V�^�����\C�����������w���BDDtWBCC@�5�F�������w��v��:t���
��������R�D�^��z�d�z��"����~�mG�6�p�����Q�&��J�a+V`���(��5�p����@#�c�C�A�������/�e�}�����X��2=��Cjj�����EGG#!!���h���?����������T;v���0),�S�uw�������QQ5>��|L���|L�����4RL����+B-��3f���S��1,S��1,S����Hq.���e�T�Q_~Y���� ���S���������������������s��@j�J%��Z�����0#���!""2
,���
B�q�p`�|C����(�4RL��o�����q����mL���|L���|L�@#�p������e����g�)+��>0,S��1,S����H1\�:���S@��d(KK
<���)`����)`���?�F�)`9�����w�A^b".o��Zm�!5jL���|L�����$����B�y{_~��C!""2�dr�����c�P��(��@5��c�n�a4*}����>=z��o��o��;wwm�^/�{{C
��9v�|}}y�D���prr�������hU&�[�nm��4^���HHH@��]
=��#�F�)`�\Z��J���#����)`����)`���?4RL���7]�w��O�?h��
����?	�u����d�:u��������1,S���#�F�)`����1������{�a���8=L��)`����)`���?�d�
��X�"���L
@2Y�^~���,�����4R�X���!,,�H�,�X>�,���s�@#��|7��"��1,S��1,S����H1,�}��+++�6���s������\��1�,�S������Z;�����|L���|L�@��$"��� �-�$"�����b
X�}�������}t���CY�%���c
X>���c
X�X)����9|+�"�S'�w�)`����)`���?�F�)`�jJ�
����)`����)`���?�F�)`�n���w�)`����)`���?�Dw�E 5,���@""j�X)����M
�VnW�$$�0#�����1,S��1,S����H1,_mS��r�"�������[�k�
S��1,S��1�,�S���M
�Vx:�����)`����)`�SzT3���

���T�����k�V������>�|}}
=���#� IDATF�	`�y�=cHTG9		(LO�=O>�?�}�;v�������>������� ""#�������������������P�*~��}�5�9�PC$""��k�S���%\U�0-2R��q�D�~�A���c����i��S��1,S��1�,�S���5|;���{_���������GC��|L���|L�O)�����������ph�������h=|���2V��u�4e�u��	666�F���|L�@#��|����S���uw��������B��D�<����4VL���|L�����$�g},�y���+�:|�A��+��{P(�$��-���#H��l�<��RC���L@#��|���-kWWL��ef��a�P�����6���c
X>���c
X�X)������s[[<��Opi����E^b�^�_����)`����)`�ch���Ov
�Vffx`�J=�������X��A_8�|�X>�,S�����b
X>Y)�������k���������o�>�L���|L����� �uz�	����G�����7�p���D�$20�A����]�3k�}����CDD&���b
X>}��o�;$S��/_�=�fAa�!����c
X>���c
[Attachment: select-only.png (image/png); binary image data not shown]
�����pT�]�������T�<�f
�F������Vi�g����DR����...���v�=��?�����0�0N;�0�0�8,�a�a���0�0N@�a�a'� �0�0����a�a��`�0�0�d�d�a�q2X2�0�8,�a�a���0�0N@�a�a'� �0�0����a��O���+W�t5��x�"&O��#F@�����;w���Q�,�/��6n�h�*2SG`�0S{����S�j�V>���;��� ���x�K�.a��-v��5�/FAA����0����a��p��L�0o��
�=����_b���6�Y17o�������aj9.5]�a��x���0l�0��������^�G�=0a��d2dggc��9x�����eK�~_}�T*�y����b���x����y�f��z+� IDAT�������E����S����[o��:����X�z5rrr��K���+pss�l�����e����#((���C�&�	���.F��]�v�������=W�^���W���C���G�-���/"44)))X�h


����"%%3g��R�,SNVV����:u
���x��g��#��{�9s� **
����t:�]���GNN:t���_~����c��a��}�6m�,Y��'O�^�zx�����uk�>}s����1]�t���Cq��u�Z�
qqqpqqA�N�0v�X���W�V`��p �0��%K�`����;w.:t���� L�<��-���b��y�|���~�}�v��	�����y�0a�����[�n��w/�x�	�9yyy���3��[����[�s��	L�:m��A�-����b��������C��=a00x�`�F<������_R�,���3����C�R�{�&�	�
������}{4G�E���q���d2��J�*�
Ri��wvv6|�Al���;w�R�D����z��r���_���C������s���<��}�����G���-���������1|�p��j������z�����l�����*�
J�iii���#���t���Z�������SOUr���;�0SK�����m���h������HDD��_'�m�6���v�J#G�$"�+W�z���-����@,��}��G���iYo��
yzzRVV�%m����P((33��z=����k��fu�Q�FQ��--������uk�������;	��������O���4q�DK���-]���r>��c������\K�o�A�{�&"�O?���J�e[HH����DD�����D"�����=%%�<<<h��%DD�j�*@+W������!�{�n""Z�n �VKDD[�n%V�x��%�?~�m�0�}�@�aj5���
Cjj��Sr4$$��G�4�F�^oI���;|}}����t�t�bbb������������%**
.\@vv�e�G}��������������B�@�=S�s������kWxzzZ�>�����o��w���		�B����F�������G�Z�}��G-������kr�}�A.����^���GQXX���HL�>��6a���
 �0�����...�a�;�������A[2�,2K�]�~}�2����c.����/�x7o���Ti�n������2����e�We����U�VU�_���t\�~e��v81�P��]���H���3g�D�.]����A�a��Yh���]��a�{�{�q8t:�M�)kO��|}}-����# �2�}��g��<{�����C�V�I���.��R*��R�;%((m��-�\�����*�L�~�p��1��u�-BLL�t�����{*�a��� �0u�PgI����W�)�n9{����� �H��IDDD����������|�C���5CVV�����O�>�f��U��������0�L���c���m�i���q��%deeY�_�p��B���w��M�;w��Q;v,���{���Z���~X2Sg���E�F��z�j\�~			�4i����j��$D���$��?�qqqX�`z��������b��Q�����g��������x���*��!C� 88�'OFRR4
����s�����_�r9�'OFjj*�N����x9ro��&�F#$I���9
������k���j�v�Z<����kW���������p��
����������V���4|�������
��0L��6���Z�����0   
6���Z�
�&MB����?�>>>�#��nsN�P <<r���������p�~AAA1b���|t��_��e�/��^^^x��g����J���c����<M�4��5,������*5j����-[��~@��=�\V��-�e��������K����'�|�- s�&M,�7nl���_�>�����S�"22��iS|��W<xp�m�����p�����:t��'�xC���M�������IIIpuu�<����W�a�#�{}Mf���z���z�����t6���h4�h4����K{�F�L&+�l�z=\]]o��0��a�0�0�d�
 �0�0����a�a��`�0�0�d�d�a�q2X2�0�8,�a�a���0�0N@�a�a'� �0�0����a�a��`�0�0�d�d�a�q2X2�0�8,�a�a���0�0N@�a�a'� (((@VVL&S�W�VC��U�1k�Z����;�'//����ID���*S��`@^^�]����^�������0N�����+��D���W#))	���~��Z�+W���3g�P(��sg>2���2w���A�����		�g��

����1s��j;fMq��-����������@ff&V�\��������<��D@\�����x�����?��q5
����~�z�90e�|������CJJ
������m�������0�;M�4)����B����"##��s�Ux�_~���JD�����_?���2��??�����@D��!C��]�r�������7���cHOOG`` :w���C�B�PX�}��w��wo������+V���+W,��!!!h��5�w�~Ge�^�����^�����=��Q��tX�t)���0~��2�M&6l�����C&��_�~6l���a���=�u�����_�~x����t��2������M,^�0�8q"�^���DGGc���6+o��}h��	"""��-[b��u�_�>�j5�~�iL�2����=8|�0<�u��������1{�l��Y�p!z�����x�����_E��-q���r��3g>��l����c8p.���W-���[1l�0�n��.]��5�L�:u*Z�n���7C�T������+��o�q�����<r��6m�)S� 33���HII��I���E�<y�*����q����i6+6n����W[����KX�~=z���^�z!;;������8qQQQx����i��r��1���*<<<�P(����c���\S���S��7�x��y�
,�}������Giii��5k���������C���DD�c�@��_/Sfff&����S�322(??�Lz`` }��U.G�V��KVVT�Ozz:�
����-��p�B���S���t:JOO���yyy��h,�#F��I�&��Q��I�&V����C�������h������k�Z������������JF��rss	�_����._�Lh���D$�O.���3,�����g�}�L��N�"OOO���#=��S����i�����LzBB��_�lI���I*����+����a�����i�,i����R��g�����a�?))����(((�RSS-�R��f��Ui���#�<B={�,�~��a���e�a4-��4�V�"����h4Rjjj��+DD&��RSS+,�N(,,��.((��=^�gCzz:��*����+���M��-���S�~�����};I�R����-ik����#GRaaa��csf��a�^`X���?��5�\x��:|�p�4�����F��'�x��R)���QTT}��gV�h4��3(  �
�d2z���)%%�����C]�v-s�W_}���oO&��t:M�2�|}}I�P���+�1�rrr,�o'M&������k��G!$��h���t��Ej��5 �BA��-��WPP@���
yxx�T*%�LF�>����DDG��:�D"!H�~�)	��P(����T*����DDt��!j������\.�&M���m�,e�Z���6mJ���'�\NS�L��G`` m������������V��w�^@��������R�$�R�h���DD�e�j��) �TJ����k�.�rf��I
�����(88�~��G���m���������H�h4�k�.��"��C�R�=��t:�i��>��C>|�]@"��'Rpp0�������7����L�B������LDD���:���[�Ks��e��dVb�v0::�6m�T�b�w+�DD�:u��Z����+���#�DB�5���wS�V�h���DT,��g�_���~.)���A�
H.��L&�-Z��={�����:x� �T*�z�*�?�F�M�/&ooo@������?[�14m�4�������"!�~�m������:�J,v������C��w't��JHH �JEk�����������%"�'�x�\���OS���+,�F�_R���m�"X�~�)yyyYz�>��#�����#G�Phh���;w.)
���_��(>>���kG�z�""�������-�������G3g�$"��S��J��#G���(""��W�*=�r��������~#�^O�f�"�DB��u�'N�^����������7��O>!777�9���QXX�=�Rnhh(�?�rss�����l�B2������j���s�Q���)33��z=]�~����h�����hH������I.���s��H�H+�J��������]�FDD'O�$�Lf��1}�t
&��@Z���,YB(55�4
��y�\]]i��i���(//�&N�H����V��H�@���;��(���wY��	�A�.�hDQ��BD��c��XQ�Q��[�F#���D��X5��
����4��������8.�5y}2��.��{�s���w��9�u���T*���\���a`mm-]�z��z������������������������2d9::��c�>|Xe,E�~��'""rpp �&�������3���.$WWW�?E}6%���)""���233�����������233����455i���D�����'effRUU-Y������������S]]UUU���cISS�>|HDDc��%V^N�8A(++����
F&&&4v�X***���R���#������t��Y����={���g����H[[�N�<ID�^;'''��urr";;;�5k]�|��������BBB���M��U����bcc���KE�f�Rz���ZR����@i�LMM%�HD6l`lL� ::�%[�nM�&Mb�>|�iLb/>>�����L<�����������h��E�[�n%�@���@�PH&L`����Oh�������S,A����4�
�N�J���D$���D�|�r�OAA544����YC��-"---���al2��,--)**�)����
�-[F��uSY���d��������[�o������g�G$�@���'"��'���9k�-%%��!`U
q]]���-K ^�x�ttt����DDYVVV2=�_|�������2��?'�l�2""���a]��7�s�������O���CD$�OuuuY��G�%J������T*%ccc�<y2S����3��"=�P��i�1b������w��A��Z���a���q#=z�d2��������G����DD��S'F��UT	@mmm�����>�����G��'555���~���
�%����w
9��a���1bbbb��g�1���+}��C���O�>EVVbbb��/������-BCC����|����OOOX[[���������k�X1�={�T������������'O�0��F��-���c����;;;%�bZkkk`�����GYYn�����
��������X�>}}��E�~�X�����7������|>YYY,��e;~�8�����y�fL�2���CHH��������];������BYY$	0�z���������}��u�:JKK��������TWWc���������C��)b��u����O����g�_�T�2�b�"�/��o�_q����]r���r��bl��	�|�	y}XYYASS�����h4��_9�������?dgg�c�����b1LMM����7����-��h===���e2�?GGG�1'N���B"� ##�u_�����M������PTT����3����+�M�������r��(�o�����^����Cs�#}�2Gp�����#..7n��Q��T*EUU�A��V4���_&22b�0l�0l��	h��-�����W_�b�:����&"##������$�;w�����d`c��4fS0�|,Y����������P__���4���/�������5k�`��i

��;�������<O���������eSR@.J/\��$��f����������� �����={�D�N�cccH$���0>�����(��*=z�~�����iii066f���3B�AAA��� �2�999h��5444�������O3�:::���f����eee���/�mllVVVJ���c^���HOOoRP����9������y3����,�YQQ---V--�F��-Z��555QYY	@���j���Fyyy3J�'��	���|�u]]�>��x���d�����+LMM��������'������U�v���f��6
n�N��'�Y�r%�,Y��G������O �e��(**b��={����������Y�T���{w���"))	��� "��������7+���}��^�L&���+1u�T�^���?~\����������0			�1c��o��m����j�����g!
��{w�}�����s'��?���;�6���������O3���,V/*��[U<{�~~~pqq��?��$�=���LV��\�x�����VSS���k������e����c��Obb"ttt�����?�/_�4��gee���kX�|y�i���#G����r���#uuu_[.###����n���mt2����x�H$���#������tL~~>s��x<��W���ajj
>����|����
�033�@ @PP&O��F�����L���L��G1��~����]���o�����~�
�����o���S���������op"��={��<��w����Y
���W���Z�o���8x� ����A�1
����k�.&���_���k �J��uk����C���������?0b�f����ann������X+e���999�x�"c��d���Fzz���?~^^^�^�C�a��
HIIi�����FFFL�L��k������3�_����B�8Us��Ldd$��i��;w6�������������������T���"�0o�<����W�V�>v���o���������)S```�!C�(	���<���������ov>������t�?KK�7*���+����������nt��C�����mt��
�\ ��������s(..f����1rssYs%������+
��K��������sgDEEA �k��HLLdD\�t	s��mrH^����U��Jpp0�9�����w/444���3��"\�Dzz:V�X�s����0d�r�2n�8��;:::HHH@BB����p4��OG@@>��xxx���3�pS��_�z5������������~�-Ya��a���GFF�R���
������O�>EBB>����:c���:����+V@&����G�Att4bbb�d��5
���<<<0d�hkk���s(((@xx8y���m��>}�`�����������������������1i�$��9q�"##Y���X��b,]�T�������Q����?��#&M�KKK8paaa8v��n�
333�=�V�������������P)�SSSq��a����}��v�j���#�N����BDGGc������������������O��9sccc��� 88vvv���LLL����'N�m��HIIQ��}�R�) K�
��
���X�h��������b�I����Xsssl��
���L��������ooo�3X�~=���@`` �,Y�A����iiiLy�d��U�{}�� IDATV�w��������'N�<���r�}��k����~~~DAA�@u����l��Y�~P�w�^�����+�J����q�����c���O?����A"�`��]������Y�����oG-...���G�(++����!�aoowww��b��b�k�())A�����?'''XYY������x��)$	BBB0i�$�Bx{{CKK�Z���a����dffB]]S�N���3Y����ZZZpqq��Q�X���������w:::�;w.��������G�M��USSooo��r"BCC|}}add@��	�B���BWW���PSSCff&����f����COO�^^^����P(DVV�={'''$$$0yqvv�@ @ii)���aii���P�m����(((���'�������d000����|>*++�����'����)SII	�'ggg����b�x<t���;w��{�PYY�	& <<nnn())A��m��S'4���x��1:v���+W�������[��������{�Jy100@�^�����z����S�N*�N*��M�6���V������W�^

�H$�D"�L&���'��_����SSSL�0���(++��/`ff�i��a���������#����~>>>M�W�2uuuprrz��P(���C!�P__???DGG#>>���pssCmm-�����W_���;�u�\]]����\���.455q��]���c��IX�h�����9�������D���c��	������������agg777&�


���A�^� �`aa�����������l�����n�())Aff&444�i��1�S[[WWW�o��IG����������<���@,�S�N���3s-v�����PSS��a�������,���`��%1bD�������&����pksppp�'
������ n����;6�e8��������R�o��:u*jkk�x�b��������h�����=�	@��Hvv6V�Z�[�nASS�;w���3abb�Og����_'98888888�ep��pppppppp��� �?W�x��"�����v�JKK��)���Xs��)D����uuu�y9�^]-��������!"|��w���c��}���Jl��7o��H$B�=555�1:���@f��|��g�?����?��g�`nn���,������[�l���w��E���"((��J���}��Ejj*����w��t+**�������c��a��(o��	x��9��"@~~>,--1j�(��jI�R|��������CU�}��a�?�e366���=z�����+w����}�p��},--1h� �Z�/SUU��{����K�H$066F�=���`ll,z��?��[���o�e���S�����c�����������0f����r�@~�|��w�w����������� //�}�����l���o���������'N������K$t��	_~�%

QWW���H��������r�
�����Q'N����lll���C888��~@�V�PZZ�!C�`��)���c�����3g���~xg�(**��
���5k����>���9RRR�����W�6c���X�p!~���&�:u���Y������/� $$;vd������i���c����w/�����E�������G�V��LKK���-�L����b�������?~<��o�k����W�X����_�+11�}�S�{��a������������������������9�����3g2>7n����=���sss<}�}���������sp|�����!EEEQll,+��5kPaa!c����	eff�|_�xA���DDt��A@O�<Q�Y\\LUUUo����"���V��������T)/%%%T__���DBuuu*����Qyy��}��5��k�F����!�D�d^�������������������������������������l�B�{ee%����L����


H&�Qyy9����QVV�������������_����:������p����_'rss�O?���s0c����W�?z��Z�jEAAA�->>��|>m��E���]$h�������$��������X�yyy���J&&&TPP���|>-X���<7___���V��������R2����W��u+���Z��dTPP���BD���@*c�	R���k������xs�6H$��d�����[�n$�J�������/������\\\X���i�H[[����s��5���w�	��_�����7o����T%:}�4UTT��������������u�XP&����_244$�HDjjj4`�z��9�9�>��c��'O�L...���@5554e����'�HDB��"""�?�D��


d``@��m#___@jjj4k�,���?�c���D"}����q����������6��|RSS�O>���?����K�.��������H.\D"	���g�"":�<9;;�P($uuu����01�n�J����}�vRWW�)S�0�066��{���+W���~c���������K�f�"---@b��F�EDD?��3������dmmMG�a��?>�D"��� SSS��g#8@zzz�ttt����***���#��"���`���b�jjj�S�NOaaaYEFF���)������.�=Ze�)S��P(���|""�>}:ijj2�����Ejjj,1�:x��JJJR�S\�� Q��]�_�~��7�|C����M�6t��Qrrr�5k���p�����U+�z~Y�����H]]�����}��t��1��y����9Cb��rrr��h��14b����/�E��Z�lI���g�����3f���s�)�"�p�����-[2�Hpp0K,������K�g���������G$�i��mM��?���e��w/�����t��
���?�H����/����j?PT	��X�z5���2/^L����PLKK#�\�l�D"JII!"�����3�������	��umm-�����H�T.�)--���222��������@uuu�����'ORmm--X��x<yzz���W������C���Lo��U�HCC�)��������F�������1c�Pyy9I�R����IMM���=K���4t�Prqq���b����'O����.�3�***�������H]]�n��MD�FZKK�z��M�����k�HMM��j��YdjjJuuuTYYI_}���������O��P(�3fPMM
UUUQdd$�h��JKK�H����[GR��rss��������t��UFp��I*//'333�;w.�KnnnT__��p��!���HDD��#t��a��e��������������L�K�.����l�N.\��\]]U����������""���$�G�g����J���$www�����+W���g�����IUUU�d�@.\ �?�/_NuuuTUUEc��%MMMz��!�;�\\\Xy9q����,""6l������c����JKK�O�>�'�?@���g�������C|>�y���#mmm:y�$egg���K�:99�����5�._�LUUUTPP@!!!t���&��e���(00�Y���2|�prrrjv�w'9>@�����055�D"m����9880
����h�l��5M�4��s��a�1Q����xfrr2�x<����������E��bl���#<�#�B!M�0����>��K�2�S�N�ivv6�x*�:u*�����C$����Y>���@DD�G�f
/Z�����������d2������(�|/�*X�lu��Me���I PRRc���o	3�]ZZJg��e��|8�?���&N�H���������F��U5�uuuL���,�x��E�����w��eXYY���~���������<����e���HGG�u]4Fxx8K���!���b�?>�C���>���e
�=z�(	��2>R�����i���L9��o�\����B!�9p������b��c`�jMLL(&&�g���t��Q��ddhh��'"��o���Q�N�q�W��{7���R�V�h��������w��-�����p�#p��D�n�(�����{�b������g�}��?~��QH���K$<}�YYY���a�/^���w���

ERR�%rbb"<==amm���o���
��]c�x���R)rrr�����CJJ
�<y�l<-[��k����������l�iM���QPP� ??eee�y�&***jjj?~<bccq��i������c��*7o����6,X����|dee�l����u_7o��)S�`��u		Q������k�~�YYY(++�D"�\�=�����?��z�n_Gii)BBB���?~������j�9qqqppphv<EL���
4440}�t��=��J�R�qS)|�|~���c^>��3g�(�K,c��M�W��=���455�F�������������c��D:v��JO,���<x�|��������� ��\&�����pttd3q�D@nn.$	222X�uqq1 ++m���|�	&&&pss���>�?'''Vz�|6�%K� >>;w����w��=
�����8yR������_������}�.Gp�����#..7n��Q��T*EUU�A��V4���_&22b�0l�0l��	h��-�����W_�b�:����&"##������$�;w�����d`c��4fS0�|,Y����������P__���4���/�������5k�`��i

��;�������<O���������eSR@.J/\��$��f����������� ���_H����:uBHH���!�H�����������T��x�������������7g��B!''�\t�d2����u�����h4���.N�>�l�������U������R�*�X���XYY)	���y�n�"==�IA��o_�����b����>XXX�gEE���X1����v[�h�����Dee%����q@[[����(����'@.���Q��uuu�����!22�U�W��&xyy���p��|��'pwwgD�T*��1cp��?~�������v���;OM��6
n�N��'�Y�r%�,Y��G�2D�l�EEE,��g���ASS�5k��t�w�[[[$%%���D��`YXX���1|��f�����o�����dX�r%�N����W3����+��������|�� !!3f�P�m��-tttX�~s8{�,�B!�w�����9;w�������s�����q#���q��iF@\�p��cbb��E�u��g�����...�������G�����uU`cc��/������jjjpuum2}www�l�;v�����ILL���S_�����������F�+�����k��|��&�~�#G� 99Y�~777F�����\FFF��JoSnnn������D"A�������<�c����{���)���z��SSS��|�����/^��P(������0y��7��:����c�t��������������hhh��Q������/�����
���%<�����;������~Cll,����$��������������g�f?��C����{�nV�r��U��������:t(<��{�b��AL���GGG����I~��W����l����H�R�n���=|����cz8����1�:�
���93���c����G��������M&�!::���*�s��qxyy�z}:�
6 %%�Y����1qd2��]�j*����~�:
�����^;22m�����;�������f~aaa���OAD*�_s�D�7o������W+]�v��7�|���hfXz��)000��!C�K^^���aaa����7;���Gzz�������������g�w���_+
�k�����}��u �����z=w�����soll���\�\������
�Bt��{��a��u��QQQ���+!����.]���s�69$�x�C�*�����#>>111��?y�$���� �8����8y����?��"==+V���seee2d�x7n���$$$ !!�u|xx8
����#  �|�	<<<p��F�)���^����������QTT�o�����,�0l�0���###C��d��
@@@z����O�"!!��y�
��B]]}����+ ��PXX�#G� ::111X�d	F���~�
2d���q��9 <<��gk��m���B�>}0x�`����������add���������4i����8q���,[ll,�b1�.]��?q�D���(�������&M���%8���0;v[�n���F��U�V���aaa���AAA�����8|�0z������>�@�]�v5}��S�NEaa!����u�V|�����xHOO��������c��9����1RRR;;;���&&&�����'��m[���(
�o��]������������P,Z����L}hii���xOQ����9�m�ccc��|���HLL���7�����
�_�@@@ 00K�,��A�������4�<o����U���wo������'O�Dyy9�����k���???���			�@��I������
6o��t?(��a��]�����{��E��}��wo�{G__���J�����\1�����Z\\\�?�	��QVV���C,����������h��PRR����3��NNN������-<==���SH$���`��I
�������Z�j�a�����������:�N���3g������������Q�F��YYY!44�w�ttt0w�\�7�����E�=�|O���������D������������
�������.������La��5������$	���	�P���,<{�NNNHHH`�����@���R8;;������h��-233QPPOOO|��7033 ��300����|>*++�����'����)SII	�'ggg����b�x<t���;w��{�PYY�	& <<nnn())A��m��S'4���x��1:v���+W�������[��������{�Jy100@�^�����z����S�N*�N*��M�6�zA����W�^

�H$�D"�L&���'��_����SSSL�0���(++��/`ff�i��a���������#����~>>>M�W�2uuuprrz��P(���C!�P__???DGG#>>���pssCmm-�����W_���;�u�\]]����\�"���������w�����&M��E��w���������|H$<&L�����tttP__;;;���1ylhh���z���H������O�>E�n��e��G�u�����@II	233������(L�6�����Z���2�v�� >>>�������CDD����X��?��/f�Y\\''�F���]�����Yu��i�G(����������(,,Drr2���ar��mt����/�9888�7�����xO����}��ppp���SQ[[������7n��P(�������/������l�Z�
�n����&:w���3g��������Nrppppppp����������������	@��1jjjPZZ����X!�},�d��%��ECC����8888��y?@v����w�b�����ohh��-[P\\��~fS\�x������N�_����wc��Q���o����s8r�&O��r*�;k�������x�wArr23�����#99&L`&����Bbb"�����z^]�����HII������];L�8��f�} �J��_���C�e��v�Bdd$*++q��8::���G���G<+ IDAT�����###8���*c?y�_�5����[[[������T����R������(++���	z����� ����9s)))x��)�B!1d��m��������O�"**�MN�Jjjj�~�z`��1J��k�.����PSSC����^������8>���i�����NB��Q�������S��m�����bccI$�p�����k����/������>|X�����f��I��^�����Cb���d``������s:q������t&L�@DDt��U���&ooo�;w.������&]�x��_�n�x<8p ������=�����g��[-ZD��O?e�;u�DTZZJR��RSSISS�z��As����� ��x�~�z��/_�L����BBB($$����Kfff���M[�ne�>|��b1���Ci��)D���@���c����P�>}������/M�<���CVVV$
i��5,���j���[�19������H�����w�F}BCCI__�>��3����H$���s�I��8�accC����%K�4*�R)���SLLEEE5[�=�<==��_~��P"�PMM���


TXX��0�1cu����;�RFDD���/����/	���*))Q������?�d��������M����Seee�>���M����RAA544��)**"�D�dSXVV�T���:*,,l2�7��������I,�������������Yy���������u���MS�La�������9}���J�������Le�uuuTTT�d��_�N:::����$���Yb�k�����E2����9�LMMU�W��������BCCI$1���[�H$Qpp0������������lmmY��'�|Bb�X�oA}}=��=�PRRcW0;;�Z�hA_�55*������������m�h��a$�J�:���I���m������JDDk��mT�d2:}�4���4[�9s��R)]�~��`EE�;�tuuI$�H$�1c�����"


%---�D���C3f�`��>w������T
���O�L&��/��,--���
�����i�����
<����I$������Sii)�����m������������+�����gV,���������e�+W�0>:t ���F�����Z&?'O�$GGG���$
�U�V�y�fV����KVVV���F��,,,Xej�������Z�`�x<�����������{w
�$
����~��""��� �XLd�z��	Pbb"8p��b1#L�R)���P��-I$�@ ���`f���'��3��UQQAFFF4p�@V����h�"""��>���\*))!>�����k�.��x����\����,��3g�������K���	���3��������?&>�O|>����h��eJ�����:u�D����@�DBb���|>�D"�����Ig���;w��b�[�����������S%��'���g��(<<����������4@�6mb�~��S���@��u#{{{��:���OIII*�o�&"���<�\8�Q8d��������	�5� �U��J���@o*G�I�Z��k��1���������}�R�v�(33�����?Ozzz�p�B�xM	@o*�B���t��-����e���@ �����������n��EDD���;���Qxx8C1��{�n����7nP�-h��iD$o�P�v�(%%���������������t@������T�����~���[����>|HZZZ4r�H������zZ�f
�c��1�X ���3����jkk)&&�����zh��9s&YXXP�~��������G���dbbB���b�����7��c,>��#f�U��U�H[[����i��=�
��(..��������D$��qrr�~��1�puueb������5ikk3�8??�x<�9s���6n�H���D����������k��>}��\�B���K,�������:����[7�����������?��tuu�ab'''���'�T*�m��:u�+nll,���Q}}=K644Pqq1���QTT���
��?�Xe�5%��g@7n� "">|��XD�^��� ""Z�p!h��s��U�<x@D��/^$WWW�����*hmmM���t��%����Y�f��#G�,����"�U����,..f�k��!---����{��)
��=�Z�n��]�������m�����d2Z�x1]�r���_|����3=�B��F������?�������=����K���z�*���x�O�^�����������;"�h����a�����������kX[[K-[����("j��={6�x<z��cKHH �G��?g�v����JD��_WW�����������L&#CCC�1c+��}�=~��N�8AjjjL����3i���dii������������HS�N%"�`�<`(x��!����^�xA���4o�<�y���%���sF�+D&���8>>����G2��.\������:�x�"������w��XP���9+/��j�*����+W���iL������W�C���sg������r@qqq*c��������h���$���?p��|��w��������A}���7��Njjj�N�8��!�}��Z��?���f�w������T*EZZ�>}��{��������S��qp��1\�z���s��>}���*hii��<�9s���c������B&��������|��3���"B��Y�:t����z<y����%===<x��e{�G__���,�G}��vpp���W�h��
�����<������a|���YK�	�B���*��-����l�k����h���!�k��7oB$a��5����������#::)))��{�p��U�X�B)�g��A"� ##��5zqq1�������D8�<����_����CII	N�>
�:u
~~~PWW�T*�����s�N��4�4=���.����������8t��������w��Z�����y�����;��_~��u�PZZ���
���2u\]]��#G"..N���������Xl�����C�.]^{L�~��l�z�BBB�|�L_R���i+|y<^��0������p��u�D"@��]1m�4�?VVVMGq7�*'qR�����~�{�4>�G���o���)8��Z�?�]�v1�VVV��� �������������H�x<�����b��n�������yg���#��o���}{TWW��WQ�������+�TQQ��^]��1���u]�b1k[$�������yQ�O!���y9�
<x����3��f���.������UW��{wfJ333����w�Fpp0~��'XXX01_��97�����H�h�������g����[�n���eee��c����S�N1������������?���S�Z�l	�;w.lmm����_���
CYY���}}}�T]+���+���1v�X8;;C$���S���9s 
����
�d2����u��*����"<<�/_��3g��������_���wXYY�E��~---�j�������f�X[[������h���J���iV>�?�s��������:4+���)��������<y2�_��ZX�*���I��Q�����6�*T�������%>>���,��G��G�F```���O������b�
�^���o�����S'�^RRf����<t������[������s�������Lz/�����K��m����7���G%{��=���u�m�D�M�65��3l�0L�8UUUHLL��#�733�@ @PP&O��2^�>}���?���
;vD��-�����>����x�������?~��w���y�@ ����Y��y�&�|>���[hh(BCC��#F������S2//�%����������dddd����LO�T*���������������Rmllp��E���+��J�2d���q��e����<W�bkkWW�&}>�����EEE� ~�. 77`�ccc�}�v��5K������WWW���5+��>��Jddd����%$	�&��4�!,����Kx�!��4&���[	�_FUU�^����������M�6�P���b��U'''�h�B�g������o�*��`ooCCC���������6m���


���?�����_`kk�^&2>x� ����z�:u�.��uCAA.\���<�iii�p���n�������Dfff���M���***������.����[p���������`�������+�.]���s�2��}����������^(CCC�X��:ub��'����P���.�������uuu�=���=z��
��
60������o�w777������[�n����Fmm-���u�PWW��4gdd���U3���0|��� "�����_���K8q�����������)�����5j���q��cG���a���8v��_*�"&&���X�ti��������t�����f�
��#GX�<���pqqiv�;\�BFF,X@������7���M����v�Z��K"�0>9r$����.]�`�������P�|���_�l����HHHh4?jjj��a>��S8���x��!@>,�v�Z�7�nnn����w�}���G�FEE�.���K�m�6���c����J������P_tt4�b1����VDP������Wc���(//G����kkk�5
B�k�����#���ggg����������_�ZE�����5
�;w��������?�0p�@�������2e
444�e����0=h�G���m���woL�4	


��q#�w������U�<==����C���	��m[;viii4h�������`,Y����Us���������QPP������1�O������{��a�q///���3f� _����~��_~�Jc��������OOO\�t	HM����x<���"**
7o����~��'����Ye������G\\>|���[c��m���F@@�={MMMDDD�o���|���c������?���/��t)RT,�,B�F�-�`"jl�����cA��v�+�A�A���������9. &j^���y�y�s��9wf����w�����������������9������Kadd�3f��GGG7�*NK�����c�+++���:::�����s�������CII��g��-���?���������J\�t	�����mK���M����30`���z��!8pp��-���	&�W�^?~<���///�1��w/���[������4<�9�����@zz:D"�������H�H+++�o��������H$���=<<<;;;���A&�AII	��w���!�����Dh�����abb�Ht��������������B[[����X���YYYHKK���!��[�����hhh������033�H$���.z��"��{� �`bboooC$������M������}�����������?6m��t!:99������x��	���A�@MM
|}}��uk�&�acc�N�:�� �JY�����}{899!''�7oF\\������
6���@�P	���)RSS������@l����������0���"))	���~�tI$f�TSH$�n�^^^,{@@���S�x������y�������@ ��	Xc��R)�B!z��	eee��� 44���HNN���
f����3g��Q�������89}}}������>���>�>}���R|���.g}}}������999pqq��-[X1{{{�����������q��z�j�����<==���������`��uPRR��������/_";;���������'�������hW�X,���kBmm-<==aee�D���*�m���������3���d2�����G�F�u�����&MB���QRR���rXXX ""+W�du{�]�#F�����b1


������@l��I�;'�affccc����[�3�999(..�H$���:t����:���


���RSS����+V4���X���o=>���#�DWS�����{�.������ww��;�8�@��������5:����������������e98888888�ap������fJ��M���cA*�~��|(d2�
��7�'Hll,�<y����7Z.���u�V��`}��_��c�0y�df�����{���~�3����~��7�>}��Mc&�`�n�:|���r��}�?�a��!??��L���q&M����������8���������r��^�|���x��9lmm1y�d����%	�-[�9�����'NDee%?~deea��HOO���>��]�6Y��!7��@ ���5�w��V�Z��SRR������(--���!�u����fN�7�x�"������(++������[%'../^�������5���/�}�v��������wG���Y>999��};�>}
]]]�����9988��8>^�zE�
"%%%RVVn�'##�|||HII����ZTomm-��?�TUU	]�z�}��V�b1-[���S�N��TWW��9sHEE���;w>j����/'�������t��Y�������4i�����;wH]]�|}}i�����W/RSS����3�111���h���AvvvdjjJ/_��`1.]�����CYv'''�������D"�+W����u���,X@�����~h����D@]�v�a����a��o��dllL����m�6���S�H$���>�9��O�N���Bm������qq1�����|>�����M�F���#KKKRVV�u����CCC����/�1�{���P($�?>��������=����>ijj�����?��
F|>��-[������O���VVV������+������)""�f���b8v�X���+=z�O������i�\&�Q~~~��`��Y��cG���_�������G:t����555T\\�dyuu5���6�"���**++k�����*++��)))i��������H&�5�SXXHr�w����r��������f���QUU�{�����v��ADD��w'___V������K"��f���4}�t�����LMMi���ruWUUQiii���������f��w�ihhP�N�����)-\���vww'�J��m�����U�&�o����,{mm-���*s�?z��TUU)((����Y�iiidccC��������$�����b�7o�0��%
D$�H[dd$��|�{��W/ruue�/�9s&�������$2�o9,�_������#"����7*�R)]�p���"""Z,/^�H������N������O������J���4n�8�C�������I ���*ihh��Y�X����7�������&��H*������I���Phh()++255������2d)))���*���PHH���Q�p������wSPP�x<@������MD��Y�HD��#???@�G��C���[����# 
���Z&�_��������2��M�Xm:t�YZZ���)**���9�M-���_?���$�G�����������sgRVV&eee266��;wQRR�D":q���?���ttt(..��;F"�����"""HWW�TUUIQQ�������]�����***H__��:���'-]�����={F(;;��������e����K<�^�z���W�\a���3�,--����l�����|@vvv�w�����'I� IDATooo������IKK�V�\)wnkjj�������i���,(( �HD|>�TUUI$Qrr2]�x�?~��#&&�H,7z���D��'��K���($$���_��]#�q�FV�o�w
�d2��� ;;;��6���+:p�@��������'O�>e����/--����^�J<`�����PQQQ�1|H��4�O���iJ�����U�=�������������4y�d��o��dkkK���DDt��e������(����
��>|8�����G�����V�\I���t��="��tXXX��G����{����BBB�:�����O������
�4s�L"�� [[[������:�|�2)))1�����������+T]]M���#EEE���!"���L4z�h��� �XL���#��/�0�XQQ����C���T[[K����\���9s����9�����K�(''�JJJ����LEEE$���$����pvvf�^X�v-���Syy9<x�P~~>-Y�������_%���S�v�����������+>>���iC����0~���x<�x�"�������@D�?\�v����w	]�p�n��E���,�U�V����5��� ///JOO���
�:u*ijj2������n��Qaa!I$����	�?�U�����S�N$�YP&�QQQ���������C�uH���M^����#G#�ttt(<<�������DDE��r�]��PFF�]^�~������4%6���h���������""
�v��5��	'9>E�����A��K���#�@@555���"�UDD4o�<211���}���\RPP�����I�RZ�|9��u�
���-[FJJJL&SYY�>��3����W�^DT�E@���c�t���y���s�Ptt4��g�����ED�����;�z�`ccCaaaDD���_���+kX[[K����8���y�������,��y�f��x�����uww��#GQ��_SS���ussc���P*������5�U����	=����=K


L�u��94c����`2v{��%---��o���4c�"���
dff:r�������-Z��u^{��A(77��
"��>{M)))$�J������i@[[�uM�_�N���""�l�������]��������[M�4&�b1��s��C�$�������%K�������������O"��Y�c�������n`�����G222��#G2�����u+)))1?R�8��)����V._����$f�w������D"��k�����,##UUUx��<x���_p�������x����� �K�/^DJJ
�������LH�R8880v>���]�"B���Yu9::B,��?����-��hii!##�e{�G[[���,���3k�m��8v� --
�[����6S���������t�����������2����������g�mmm������c�����C���b��u�:������
		�W_}���x!%%w�����������K ))��6zQQ��7w�v�
UUU\�|��s��h�"�������������%%%H$\�p�����izmzMMM,\����(**���N�<���R����'O�u�hiia��E�v�p��Q���������e�quu5F��%K�����d2����
6��������[����������7o���"�����$�[������Z����}chh�N�:A[[			h�����D�+V ::����}�6������O�����7_B���k�pv���kupp4'9�JBB����l[ZZBEE@��s���L���6&N���<$���X>&&&�8q"jjj��<}�4>�l�������X��a�7�gk�������.������>�!�X�������b�il�8uuuF�5��z�
ddd`��U���!C����&�W&�AQQ�u��s����.������'������ �������>|����
�x<L�8B�***�����K�����G���������g-Z����3�=11�����������)o���������������s����R<~�����5h�^������;���1~�x���@UU���g|,Xeee0����R����011i����Z��� 11/^D�N��{�~��;w(**���B��)022b|S���1��i����x�������++�������~���r8::2�>>>�5>}�4���OOO�>�H$7nN�>���t���Eq@U��-����_K���z����Nr����hDGG�lYYY��c�b�����������/[�@���^�Z.������3�����CCC���0'':t`���{s���A�9i�����^�����������O�.]����uc�o���D��76��	����QUU���8|��g��CQQ�6mZ������v�B�N���}{������_|����������I�$$$�s�����P�AUTT�����j��������];����`f���>����9%srrX������8~�8������C&�+�H0j�(����3HNN����\���p��uxzz��I$>iiiHLL���i���M��������O���q��A2��u�^����l4���?>v����s���8777�(�7�����akk�={��C�pqqa�z����������2�c����k�p��uX[[�(����?��r��������V��QUU�;w��ey�SSS�n����k���X�v-�]�v
�r>�����e�_:~K����������������U+l��VVV�����#GX�=z���d"�'N0��b�?�.������W���\\�v�����Y�����HNNnT\�]�tAEE�?���QQQ�p�c����������O��g�5Z���"����T��o����2��}����{�p��qt��@}JOO�W����#���=���WSS}������QWW�>��c����::: ">6l`�{��)>����S'())���G�AOO�N�Bmm-�OLL����LsRR�~\5�1b�
"j��|��w�q���=�N���DDD@&�a��r����4�3...�8n��=F����(���/,�D�����}�|�M�c���������L�8JJJ���FDD���_����}�v�8q����;�?����~"$%%!22@�����P���9s&�_������x0>������������;b���LYpp0��Y>l���������66o��h<


��a������������y3�,Y��[r����0a


��S'$%%a���,8v�XTTT0c��������PRRBll,$	BCC�W�W_}�H}}}�PRR���~���G������8p���i�1c�@YY�������!������+W����S8z����Po!==c��A�p��	���`�������o��

���������[��������;?��3z���)S�@&���D�����b���+F���#Gb��I033�/���k��!00��SWWGPPV�X//�f����������0p�@���a���1b�����
���q��A����All,f���~%��7o����cc��5������7�v��7n ))	W�����xpss�������C���]�t��q���9�����d�dff���?��3|}}��_?�|�jjj

E��}����???�������~�s^]]��K����3f��+���nrU��bmm�c��!$$VVV���ttt����s������������-[xyy���	����t�����m�6��7����FW��1c��X7l��!C����]�vEAA:��}��w������C[[_}�������ge988�FaI��������
���C$���^^^�D�D���B���Q\\���l�D"��������������d2�����{w����H�V�Z���&&&�D000hvl���������Thkk#::�k����~��!++iii044��u�X��<x


���fff�D���E�=@D�w�D"LLL���
ccc�D"����������o_<���������M��.D'''���#77O�<���9#�������/Z�n���b1lll��S'�R)���c���o'''���`���������:���a������SSS���";;�����a3^OQQaaa���ERR


�o����(�H��U+fUcH$�n�^^^,{@@���S�x������y�������@ ��	`ii���R)�B!z��	eee��� 44���HNN���
f����3g��Q�������89}}}������>���>�>}���R|���.g}}}������999pqq��-[X1{{{�����������q��z�j�����<==���������`��uPRR��������/_";;���������'�������hW�X,���kBmm-<==aee�D���*�m���������3���d2�����G�F�u�����&MB���QRR���rXXX ""+W�du{��YG�???��b@MM
�����i��wN,������r77�g6mll
�L�W�^A$a���X�|9s������]�F�����]��9Z�
�
<�Ot5u�O��w����
���pww�����������������������	@���@ ���[�S�pppppp|l�.`�\���������'9�+��(..fV��X���2��q|���]���������qpp�o�us�3D�]�v!==QQQr�UUU��m<xeeexzz"44����1~��W������urWsss���a������4`gg���WC&�a����z�*�������q��AKKKn������[���fM���D@@$	N�:��}JJJ�r�J�MEEVVV���aV�y���*:t7n�@AA��KAUU�����y��GVV`kk��C�2�[7p��)��u�kF0�D��7B*�b�����������S� �J������p(((����<yW�\�����1����x'^�|����c��	r�������b��5����T*��)S���?�X�.�W�^ "<������	���p��!888 77�����!!!		��U� ��������j�*�?���HOO���w���_���k��a�_�v
����>}:�����U+������?���=����������#����.@[[***��g�p�B�w���������K��>}
///��=;v�h�g��������f����?��O/##��O�L>�;B����	M�6�"##I(��/^���B!���0�}����>�����
�������RSS��,..�����waa!UWW�����h��-�����JJJ�,������\���m����
���h������x�_QQQ�>���$���m�TJ���TVV�N�j)������RUU]�~��������r������(����Jaaa�����f�}ee%7CYYY���J����OR��I����F���A��_�~DDTSSCyyy��CD���Eh��rqL�:��|>�������I$���/�]i ''�������������q�HMM�N�:����d�f�@111�}��Y����l�-�����B!-[��&L�@NNNr>�������������4x��w��K�R���!///@����
�����.��N���111Mv;����022bl
+$����_Yc���������1:t����TV=D���h�U�V
��J0y�dt��I��s��E�v� �� �1g�������B������w-%??C����&�����MV�_NN�Pssshkkc��q���P?�PGGG����?444���fi���T��� !!�������@ ����lxxx 22c�����&ttt�����2>vvv���A��������v�Z���ZZZpuu��7��?G������?2�����������7�|sss���l�###�'$$�k��PSSCAA�����������s����v��-�:���:u
����D������#.]���������5�������N�:���{,�u�������������_~��Q,����W_AKK���PSS����?�����a��y���b����g���)�|>&O��L���o�_J���qqq���a�!..�r������~����9g�4������o���l����TPP@BB.\�dwnll,XK�
0����!���###���yyy�����6������/�`����C"?�����V��&�|�M�������EjjjL���o�%UUU�p���u���i��~��w���L��#"�?�������s��DDt��i@<`�#�H�������GDD$
���KDD���B�������`����C����LUUU�`�RUUeb�����m�2�7n� �<y2�g�����8q����������BK�,!"�g��j��=]�x�������3������~j2�������-]��***(%%������155%Z�d	%&&R]]�����B��b*--���@���e��7n$MMMz��9M�<�����������������{Y�t����
�l���������>C���G�f�b��g��%>�����c��F3�>$eee�7o���PUU}�����A���DDt��y����f��}#F� CCC*--%"��{����28p�������+$
)::����n�J|>�~��"���988��A��X
D���4i�$***���\���'S]]]���� ��k�s��m�R�^�����;���'m�����������g������3�)))�����g��r�L�4��`�.]h��1���C����Y�f��C�XY�o��������!!!������&%%���]�z�.\���g�)��w�r�)Z*o��I����v�Z����BAAA,���H������c��|��;G���G$�����.\��'$$0?�XLZZZr�.66��|>�����`�0;z�(c������hz��!%''��ok�9s����&�d2���$4}�t����7#������������M��d|NNNdcc�z�6������]�ta���{w���i���w�&��.����S������+���L7o�d�{��M~~~�vvv6��|���'��nS�w�����#���%�HDfff���!'"hJN�2�LMMY]�555���Ik��!"��� �&"*(( >����k���DDqqqL<�����o���###IOO��4h	�BV��/_&t���F����J����c���YXX0C444h��I���@HH�j����-ZD<������OLL$GD���u��VVV���Am�������IUU�F���H$���+�9�N�:Ejjj����\,����C����_�����?~����6mf���������o_����#�wEE���`ff����^YY	HIIA�v��`��e���8��������=Cii)>|��#//2���=���+���dff2����'O�]EE�-�>}��}{�����#??��hkk3mj�%>o��}{�x<f�!������������'--
,[�6m�������]�[�n���
�E����'N�����#++���8p������W/��s����ggg�]�S�N����.]
�@��'Ob��������K��l����PTT����P��BEE�u�|RRR�?��I�X�������q IDAT����(,,DTTrrrPZZ�'O��M�bgg�@�� �U�&c����1cX6www?~�y����3]�M!�������i��Ri��
ul�YYYHJJ���6�O�>6l�L�www(((`����������j�*�m���c���������K�Tg<�����[�7����1��}����������������+//������v��an���8q"���aaaX�~=���9r��������:VJJJ�8qb��E9r�5v���������h;���@������
F*++7Z�����MD"k�a|����455Y>eee,����@���r����sss��}:t`�4�����s'���k����G�f�%$$����%��mCbb"^�x��g8p ���0�|�9s�E�m��o�����d��dr>������g�OS����[��;�G����'D"������c���	���H8@����������r�c�$--
m��P/����=��^�����U��6PZZ����&�����~`4E�V������?���)((����pww��������^�GGGc��=x����[���"b�?4E��f=�Nc�r����cT���98�'9�;111X�x1N�<	??�������A���;�����6z�����7yWWW8::����pwwg�����[���a����0aB�������l
������EEE�K
�LyNN�|>LMM���������������3�@��
�jkkQ\\���n������Cdd$&N���2�*))a��1��w/BCCq��-������������RRR``` �2���.\����6\�M�65���������1^�z��UTT���A]]�V���1c�m�6�����r��y�^vi��U�����`^�j�`��U�������\yjj*����U�Vz��
%%%����I333��j���<���2dH����+���u"b��Y�p!������q��}�B�?555:t�\�;v������~����|�'<W�s��-`������1{�l���5*����g��e���d�t��`����#��7����Y�������C8p #.�������}���V�|�2"##[�����3�����~����}}}���������:�9������puu�S�w���~c2�@}�M(�u'����'��9��;vDOOO���w�-[�`���077���Y��?)))�3g|||�L���O��������������������.����K��y��=cl555���/�����$$$ ??��)..������Y///�<y�����A��b����W����D"a��'O� ==��>{�,��c���)�O�>\N���� $$���L����S�L�?��}�����k�����'�|��O�lcc���o7��?~��������.<|�0���t�bbb�}�v,[����OP?� ��c>
?����*�888�.��b>|���KNNFuu5������)S�x�b���a�����s'k��C�b������/���������+�J
�l������|||���l��={�du]�9.Dff&K������}��O�>���rrr�y�f�;����455��7�`�����������WWW>���X�z5�O����J888����HLLlq��_���~~~1b^�x�����=[���u2�>>>9r$�����?`��	pssCuu5������{���uss��m�0n�8����={���c��{7SBB`jj������[��������������d�������s��5���C,X�&L@ll,|}}1n�8hjjb���(//g�f��3�F�.]0j�((**b���L6+22������+�����8w�������_(++��c�0w�\��5QQQ�6m�R):t��!C� $$555X�v-BBBX�?������������������gaff���xV����Q^^��#G��o����;jkkq��Udffb���c������:�������Z�������7���PPP��6l�
���@8~~~GUUv���i���]�v(++��Q�0}�tFn��
^^^8p ����#�<
K���?�����!33"�666���3D"D"������������2��?������%�w�������!00��O���|||���������IIIPPP���S�`�Vw��������}{�?�%����0b���������0e�����Zxzz6������w����L"((?��3��S�N���'^�|���d���a��M�.���:t����-XWW����������G����|\\\X/���O?�,Z��/_FQQ�M��9s�4�F�P�Q�F�����cH$��;�g������w���i``<����L�yyy�}�6�n���7n�K�.�,���"��}}}dee!''�����u+�9������O!�`ii	ooo���B$���...PTTDXXttt���������7nd�����a���PQQ��'OP]]��� |��w�=�H���0TWW#++��������555<y�ZZZX�f
���===���������;�N��+W� ;;���X�ti�6��*��w���q��]�Z���I�`mm���R������3g������^fRPP`2�D���(((���7n���?�_"�����w�������6o����^�b��;vd�����h��
x<�
���BAA,`���q�����d��^255���9������%w\�L555�����d����o;2���[����������D����:���b��r�rpppp�������xg
q��	���Dn���n8���	������[�����!**
����|�2kZ��~�.`�\���������'9>V���I���b���~�c�����d����Dyy��l�888>]�.`��T*����QYY�9s������`���x��1444����!C���X�:{��Axx8������>***X�f
3-���������5k�b1v���[�nA*����c��������k�.���#**�o���xxx�]�v�E���!�f�@ �����tB
��������}�6JKKahh�n��!  ���O�z��E������PVV����.7�a\\^�x�Z������+l��������C�=0`��?]���'q��,^���%988��r�w�={�n��a����e�\yvv6�e����aaar�Lp�3(**��;w��W/�������V��������y�f�����(~��%����	&�������0���`��U�v�����������c����������Y��O�F�6m�p�BTVV���YYY9r$�����.))A��}��gO��}:::������acc������O�8��[���v=y�vvv�q�<<<������pL�:�O��������Z����
G ��HUUiii��_MS�N%[[[9�	&���	���2����z�������JJJ��h���������,,,�����{AA������������oQ����v�IEE���R]]]�������=���T[[���^���|6�L&����M,Snnn���}p��222"�LF����	%''3�999���h��M������M�F���$
�����������)))i�}������������b��������KD�����<��d�����H(>>�e������`RUU���b""z�����RPP�\�������������������D"]�z��/�i��y�8��CCC������[��Q�������M�65��R��|||�������|H"/D�m�����p@��
����S��d���I�p��a�B�����>��Y�A$������LYb�s�����LLL 
������6��4j�(t��@���	&@(������?~�;g�?���{CSSZZZh��._���?{���w�P(���9ttt0{�lfM���T��� !!�������@ ���X�\�\�z����D9rd�kZb���2d455����N�:1���d2���`������cV��J�X�p!tuuann�Pooo<y�@���8t�s�/^����}�f��	'''�1���/���5��sBBz������`uAAUU����m��]���aV)i���XX[[C__ZZZ����������aZ�n
###�B�����e2��/���.���!
������&M���:�������bm�g���
�000���-n���d�M�����'���>�X�B�;w����M�����=��]�7o���S��r�Jt���������+W����/nq\���8x�`������?�u�������LYRR�����������Bn�����|�,hq���K��C88���V�����9��`c,^����������sI$���wI&������������� �PH�.]""���������@""��c)((PNNs���2RSS�u�������������KDD���#ccc�<y2���2�R��:t�@������Ieee4a�������|��dggG���M2��HUU���������g���oO/^���::s��x<���������+�<==���[TWW�dE�=�d|������C7n���j��>�n�������(**���=�_��n��IDD�W�&%%%:v�I�R���#___���`2����dddDEEEDDH���$��������?�������M��l�����]�����?� �@@�V�b����CJJJ���#��}��7�f��?O|>���YCb��JKKi��dhh�dh��������;H*�R~~>����lmm���u�HSS���9C���t��iRVV�-[���e�H Pbb"=}��LLLh���L,������M�/���2z��9������S���� ��#G=x����ttt(<<�����LMM)  ������6��v�Z@D�����������OC�v��m���F���'�����#����$	�_S===���#"���ruuee����HCC��^�J.\��3�X�=J9>=������Rx��RRR���w3�V�Z��)SX~��9#�b1iii��X>������)77�JKKI �?����������(77����HQQ�bbbXu�[����6� �n����


h��e���F�.]"t���~aaadmmMD�]uh��,kkkF,]�z�����Y>����|��&�300 ///�m�������t#*++SPP��m���h[��=KD�pwww7n9r�444(55��wrr���0f��������GDD�������������k����������������������)D���,[AA��|�>���b�`"�����gccC�&Mb�l����;FDD�����NT�:::2����dffFR���<x�����mWc���;������wi���""��4�����0�b��������KT�-�{I�Bc�Q+��(��H�r����q\@c��8��:����s������{��LF7n� '''j��-�d2�J�����W�>>>���LDD�}�I$�Z�=z�PRR}80����I[[�lmm�^�z���F/^����r9u����
F'O�$�XL�����e2���QTT��{R��������};���%K� 44@�[-33��5��urr��~�����p���2d���P*�x��1<==��_?8p�d������!99r����^�|��x��)��y��/_������#G� ;;��:t(�utt��MLL0k�,�3-�U�Vj���~����2FFF())���]���P����i�������B��/_r�x���p�D���T6L�8���.���w�]�v8|�0V�X{{{�|DD����n�:".....h��-�J�o��-�Y�����0a���


���������g��I�Z�S��;w�����D���.=z�}�6ye�B��=B������cL�4�g#,,���������={6���QXX���o��
j��
���q��E@ZZ��y����4???���BSS���JPJ��++�T�����$|����<y2|||����%K�`��	8|�0�B!444�s�N�m��O����������X�d	JJJ����������q;���:�3�g�����������Su�i�=�{�)Fm0��h��5�V���={0p�@.]*�����~��b�o�JP���"""�p������^�|	$&&"..�g����g��������{7n�������������S�y�Z�
===�d2���qi������J]���D"�����U�OW�^=����2TTTT�f��VVVh��RSS��uk^�a��!**
���Gxx88����b���x��%~�������C���1�|����N��T*�T*y�BBB8��T*�P(��|��g���Fii)����
QQQX�j"""��C������;�\���@���:��]�b�q��<[���077W���6O�<�����A*�"33���5�Pg�����.�����'''L�4	�z����K�<4k���C`` ���x�������;�?>����}�edd��D�������7�7((-����(���~;����F�bG2�Z�d|f�����X$%%�}���<ccchiiqTT��H>p�@����XO�n�P�~}<x:::000�&�������O�>ujw||�Z���%����&P9�����K���@�����������	�����������Y���b1���������L��gTT���1y�d�17o��������{����	���		P)����0a��VJJ
lmmy�;����I+u��������iS�e������UE__�;gR����X�j���|���\��c����}�U�vu���boo�M���^�z��~@^^LLL��/]����t����+���w��/�P+OD8p���k�MzO�>����k�������RRR0t�P^����B!':����3f���Q�p��-���s(++�}0�puu��Q��}��Z����?�;���y���$�%P��_
����ILL��%Kp��	5�T����������)
:t����pvv��}�x.��/b���\���&���������:t(��fee�F��f�1��-[V��qss�\�*�]�SSS�={��������_�T���cpww�s=����uT��5j��5��777=z
��K;r�444��������Xl��
.�Grr2/^��� �7������d���3W�y����>omEEn���s�OOO$$$ ''�K+((�������s���#Gx�����c���\����8t�w�
�����g�A�P@.�s@��=v���,�K�.!??��OLL���oX��
�R���`���'O�`���pvv��!CT!��_��~��W^.�#::��_�����www\�~��M��y���t���+W�@�Tr����k����m���7�@WW��>u�TP��un;�< ''�����`����A�p���[�u�2331x�`@`` BBB0s�LH$,_�\���>���u������`���?��,,,p��mN��^�=z�@�����gOddd 66aaa��K�������R���Rhhh`��u8p ����g��!66�����97j�����5kRRR��Al��
��w���?�B!f����3g���W�����#G���+��Wccc???0<��m��b��Z�x-X��:uB��]��<}�Z( IDAT�������3amm���|���!**�?�u�Vx{{�O�>����J�������#11�����///^���S�w�^xyya��������jL��;w��7�<x���R����Fdd$f�����xxzzb���������;����=���CBB<==���
l����5�D��E���K���������HKK�������___,X�o��Aff&~��G|����5k�.]�q��A�P�����wGpp0����b�
L�4	������������:t(�4i�������iii8{�,��i���xhiiq�l����G@@�����uk������FNN�n��s������S�L��JK�.E�~���O���"//������z�����b�9�'O�>��n�
///����W��`TEc���c0�A^^^�z�D�o�������GAA���[�6m`ee��-[�C�x��
1z�h���BOO���������5�B}��!tuu���_"22�7~���Dooo����ys!##)))022����97%P9������2___���"55R�����%K���������^�x�G��]�v��y37��� ����s�UTT������P*�������/�[yy9���y�/��l�2:�G���PVV����3��s433��a�PVV�������7ow�/���D���gs1mll`ee�W�^qc� 55YYYX�r%wo.^��n���e��\9�X���p���!55��������-[��A�c*�={�D��M�����{~�������X�Q�FAGG���Cii)����U�VqcI


1r�H(
��{
�#G�����i��!������i�[�l��[�~�@Dx��!����r�J����^�z���E�N��T*���. ;;&L��3j�J����j�CM4n����C�F�PXX�T
[[[DGGc��Ejc����d2���B,#((�6mB��y�e2�����yU�v�������m*�������>���������C ��_�������	V+++��� ==^^^�^+�X??��������hu3�[������AL�0s���(�K�R�h�_|��N��Q��`0>l ���K�����#G��gOa����I����0�`��i��M�gs~H�<y��Y�f8w�\��T����\���`0����`0��	@���r�Xb5D���TTT���~,T�[5�^]x���;���
��o�(++��F��{Q�f��15����Q3,�?�L����CKK�������\l���������v���x]�7o��i�xA��jJKKall��;wb��[��_�>>>������.����m�6<~�������|||���^�B�n�p��]DEE���Bm���Z������cG����c�����W�\�:u
g��Ann.Z�h�1c�p!X����.D�N���g����g���^���%���[��{W�^��c����


4k�T[#ZE~~>8��7oB*��A�\LBU8(,,��E�0h��w��QW������x<~�D[[[������;v�
BNNLMM?H{�WHNN��C����;;;>\my�.����������F��[i��)��C�����N<q����1{�l��F���(w�����;����k�.�����h���=,,,�������j�~b���#G0i��f/!!��|��5�h�'O�����������������������?X;~��79r;v��VS�2e
��������������o3m�4,Z��;�&>�M�6!55���x��1����=z�����TZii)�������������:::��g1k�,���#G�������CYY�������!C����O�>��c���j���J�S�NE�V�p��A�������O�������>����Y�`|||���+++�>}NNNHNN�������/`ii�c�����A��$$$�C�j�����:t(/^���
��x������M�x�b1bu��A���A��Y�f���.m������A%%%�����T\\LDDk��%MM�j���������n�R����*//�������sg�m�����%77�
E�u���m����]"���'�����=����TXXXk[���ym�����Q�N�������q���D"�����+�����vQQQ�mVQRRB999DD�����x�"%&&z����{����[��

����bbb�l�8q�LMM�E�4c��Z�A����m��j���������)S��1c��X,��'O��*�JZ�t)�5k�p�7n� mmm
V�>������#w���m����6������P(�-[���}������I���Z^QQ�?�@��TW***H*��w>Q������EAAikk��e��4�\N�����wo"��?���A�~�-W����lllh���<{���4w�\^�B��.]�����{]�����?Z�����?����q��5�Zx���~�:/���c�RSS��(33�:u�DHGG����i��y<���?~<�H$"1b�~�������h���j��������GDD���4l�0���#�HD���4i�$��� ��	���4�H$t��j��- �HD�V��_~��lmm	��C������i������CB�����(44�'N�8A����
������HZZZ���M���\�BDD��������6	�Bj��]�t��9w�\�����@ ����s�BSS��7/^����w����$�������I$�P($�D��7n�H$
	�n��kQ��6<<�455IGG��6mJ;v�����kI__��f~~~���C'N�PQ...4r�H^Z^^5l��v��M����-���u�FDD���J���_�h�o��T�~}�C"((�,--k2����	�����$:p�@��\.'�TJ��������L�<����)33��*��^�z�@  rss�+Vpf��)��U+R*�<;�|�
5i��***�������J������|�;��r9EGG���	�D"�������Snn.
:�|}}��e��I��}{""Z�r%I$���m�R)�?^��h��������b:u�����������w���������%''���\���6mJ'O���p��y�n�}a�NM�:&M�D���\/Yxx85l�����G
��N�<I���<8n�8255��W�Qe/R�/��+W�H$���"����L����m��Q`` 5n�����KD�/(�W�.0==�P��m����TVVFaaa$��[�n���B�_����j��1w����~��t��=""�y�&��9s���b1��3�JKK������[GB���>}JR��:u�D������O2���_�N���4{�l*//���
'CCC���"��^;+++���;%%%��W������CdjjZcO%Qe����+U�H�O�N\o��k�H ��+H.�Saa!����lll8��w�^t��aR*�t��=j��5'KKK�����n��Q��<==�D"m����>t�P


""���C���gO""��i������g���/�\.'}}}
��������!C��np��q��]����r�����ZOeU���	������f��M�X�p�YZZr�������;����Y3�4i�����1cH*��\.�C��������u��7o�����3g�����'��eK
 "�#G������
211�o����*?�
�_�>~��LMM�z��RXXH�������si�N�"��d\����I__�.]�D���g��x���SW��?���&=z��*�'zzz4s�L^�!C�p�������MK�,��Y�~=ikk����)##�455i���\��5kHOO����)--�����gc��9djjJDu�/_�$�������3��+��m�H p��?���n������_?���#�����������������������I�&�^����D�v�Z����WU�7�"��n�:�����&��d���?��JW��/^Q@@yzz�lo���ZpJJJ�m�J����A\o-Q��T�~}N���,,,�e��������"�@P���r�
���8���!�h��Z�W���so�����j,����k_����z�L��0�[����P.�����~��7���$�h���<����T*���P(���T�-O����$������wOO�:E����_\�J��O^^^diiI�-�qFyy9����x���S�r�""�LFnnnED� ����Y��k���_|�M�6�o�����<������l��%���T�������W��y��***���GGGt��@hh(��I@�z����?��p��Y��{��q��}���"??"���������5j�w��������h��)/��PZZ
888���s���A^^
q��-4l�@�L�~��a����?�u���={�Q�F5^�;w�bbbx�zzzHII���j� 11Q�8�2����s�l�2������5�_�~}���`��UHKKCQQ^�x�2T�������w���S�6�&==}�����&N�>
---@VV"##�a������P9����������e�0l�0��k��T*B�Ba�<�w��Q*��������Rw��*�J�����a@��DDD **
������!��������m�������?=z�@@@�;����5�����������y����|���5j���0l��,��������{wXZZ������b�eee��s':u���|���������DBB�������DFFr�K�,AII	������m���#++���U������{�n��������A�����S6��`�_�R������g�9r\^qq1@WW�wLU1�zy����b���#""�{����",,���(,,����1�|����R�
sssDDD@&��	�-[����'�~pp0��J�T��4�F�B||<"##���}}}dff���:t�����1g�DFF",,7n��y���/q��h�����*�<y�'O��k���t�L����#))	g���U���?��~������z����)����.pe�R)�b1����k���[�������k�.��x��qpqq���3�,++Caa!RSSaccS�5�f�����������`kk�+ogg"����9��6�g�I�&022�D"�	��!"����G�u:����y3������>%%�Fq]�������d��/��
CTT������p8p3f���W�Z��}�"..+V����S��_?����������]U�A�000�3,������[7=z[�n������AOD=z4
�/^p���'���%.]�����t��w�����s�����={���K�kN�=D�������L��gW&��������{��\����dii�6���������;^��i�8pff&�������W�����.\H666��[�n�p�B����.����si*W�����4�������[u�(���?���W[�B���{����'"up```����2g������}��w��ys^�\.�A�Q���������z���kWj����w��}""ruuUs5=z��.��w����)M�:��q�"���,S����Q�9�6�*/_�$--�jg�����7Y[[s��a�����q���UC~��"�pcKKK����P���'���>7�W___m�����\��F����;Srr2�D"���������D��5��d2���T�MTG�=h���t��A�������cZ�n��+���������dccC�������m���p�+W����5jT���!a.`�?�_����}�v���Oh���Z���>�5k���.���G��������I����w��?���r����0`��6|�p���y��066V�q��|��w�\kB��������x�"�u~��e�7�s9
�B4"�EEE*{GJJJ8���HNNFZZ�VRR�)S����������a��={��������y�������s����a�����������$���s�8p�V�r�!!!���+�{X���RP��answw��3@D077��9����"##�v�Z�����GDX�lN�8�o����9�5kJKK��3>Dxx8���T�vl����_�q����H$�W_}���x,_�\-6�����7��/�������			\Y�R�������d,^�AAA�H$���1�s�@��]aee����w�kjj�C���������W��5�����'�i�&���B[[��S���z�&//'NT��g���D"���=��c����3S�NU{�T1(srr�}��j��`0�a.� �O����]�v
���<x0 $$�������D"�6p������cG|���7nBBB��������E���X�n��wo����������ETT�fhh(������R���q�"�k�����#QXX<z�[�l��5k��K���A�V����4<{�/^��i��r�J�Y�AAA8z�(�����q��I����W�^*�w���Ctt4�	&`��}���3�������={�@&��8.I.����s���r��g����1�O��v��9s���{����3]]]>|c����+W�j�*|����8q"v��___��������y5.9���o�~������a��X�zu�.��d��%�J�6l�/_�:����.]��g��l�2n�)P9�����

���z��###<}�g��E���q��A5����K�>L���q���:�u��)�����_~��[��c���~�:n���i��a���\��?�AAA������p��Yn���b�����M&&&r�����z�*���0x�`������������C��+W����?�����Odgg#66����q�}�����!�b�
�yo��111(**���U���"""���O�����_���@�3g��������#�6b���o��3g�n��:��Qw4���7�c7���������lH$�l�����D"��mmm���'''.�����sss����u��x���R)&L����@p�s���c�������G�```�9s��&f���-�����wo���Z�n���{����rgK�.�&���u>>>����Q}�����^����������)
������0`��y���4n��W����7���PPP��}�b����dx����������7rc���� ��PRRwwwXYYa��000��{�P\\�^�za��u��P.�����:uP9q&55�'O���������������!��aee��������
<x"BLL�������E�pvvF@@rrr���+x{{c��� "t�����P*�������tuuQ\\333XXX���a��5�M������������d24o����5�Q����~���o�� "���BCC��ao,�
{{{�;���(((�T*������+|��7���*��ysXXX�6+++x{{���*���0d��D"���B�P�S�NX�v-BBBx_Z�h�x����6l���Hooo�x���Tdeea����
---�7.%%�^�B��-;;�w��=�!!!(((���������c�����jhh���[�����X���677�����k���777H�Rdgg�U�VX�z5w��R)

amm�������E�x��&M��6V�*J�b����,����`�� �����Q�J�h�����L�:���!33��5�z��&����\�r��/G~~>n�����]����������oa``����j��2�6�W�`0�#<y�,@�f�p����]��a���-�pBIDAT-[��;����L�1�RX ��`0�'��c0����`����T����wP^^��������rnU��BDj���[JKK��Q\\\cl;���7�\��iii���E�=�p)U�r�
:���X[[c�����-�N�:���Vm����m��a��1(//��!-�.�L����.H��s�p��)dgg���aaa��������c����J��q�\\\�����gc����<@��=���S,_���9��y��[�����������;BBBj{r��%?~'N���*
��f���ikk�q�����T['��s��a$''#;;���h��=�[��*���;������S������n��W���������7�|�����6����g��pqq���q��5��������)))�����3g������/����������O?}0{			�Js��E������KKK������<��/Y�:t@AA��i����}�vH�R���#22pqq���Kall�B���HUH���C��������Yc=
��/��S�������T��u.����Z��?�����3f^�zsssa��h��	�
�
�\���H�i����������c���m��7no��+W�`����4����w�7�`�U���R������K������k����_pir�����(((�WV.�Svv6�&i��)44T����
��������RVV�[�U���[	@��]���jm����q]\U���-KDT\\L2�L-�A���o���G����������~��9��O�Rzz:	��+))!?~<�fooO��OW��T*)''�[���_�~MD��+��g��];n���d``�[�x��}�n���f{������E���k5����������������/��BDDdeeEm����/_�������W��<y����5�444h���ju�����B!��9�KS�C��������[��P(���EEEE��������[k���|���?�����M`0�k�d���v�'v��%%%t��i��������������)277'MMM����y���	��w�R�N�H[[�ttt����6o�LDD���$�H��~�����E&&&�s�N""�|�2������6iii���
<x�+_�r�Jrqq���8�H$�5jDw���o����b1P�/_�v��
�T�~}�����
EEE��� �@@]�t���*,,������V�ZQ���>}:I$�D���E�����n�V�h��e���N�6m����6mJD������j��������G�_�&�DBB��D"I$�}�6�~���N"���B!iiiQpp0s6?~L�����b1�����M��	@���!


�H$�~�z���?�����<z������y��&�|��{@�J
�V�^MD��L(���������Mzzz4v�X����Rddd��GDD�X,���<���`YY8p�����KDD�a�rtt��~������������y�V�ZEfff���MB��Z�nM/^��###i��!�a�244$$�Hh���\�BA�g�&SSS�D���A}���������f�}�����	@����`u������
��TZZJ�������SII	�y��"##I__���_�&+++������u�V���?�������W��u�H$Q~~>edd������Rqq1UTT���ICC�n��ADu���-#}}}6l�����/�����4iB'N$�TJ���'���!""�LF���������;U�p��b���_H�TRVV����<<<H�Trh��]�P���&������R$�h����jooOM�6�Y�f��+W������\�X���####Z�t))�J���������)??��r9��1�$	��s������C��������5m�����H.���={�^�z�,**����S��m)??�F1�~�z�������egg��������>}��`jj*�>|||8�]�{�&[[["":z�(��g��X���3�>LD��yyy��]�7�oh��u$��g�����JeeeEB��RSS������6n�H2���R)
2�

�k��g�����9�rrr�������C&&&\���E�H$qB���g���L>>>�^��&�D�d�k�����L����{I��T�""�TJzzz���k������u���@DDk��%�X��}��������%KHKK�sCU���7oN&L ��	����s�S���$�y�����s��������y=�o��!�@��b.[�����x����_SII	U�f�\�2��

y.E"��{��P(�zd�5kF...�2
��LLL8!�6�����kWj�������i������������c�B^^^DT�"
��f�^�.]����]]]�m���WIOO��-[�K0`
2���O	���S@@���p�����k�ED4m�4��� �BAk��%��������Z�j}8����	��y�wN����DD��u'"JOO'�3A��������	������������'	������x_�d�a���R�����{�.`bb�r14k��+����[����;��������l�����������		AZZ~���8q����������lRRR��������xn�Q�F��������^KKK��b^�*����Z�l���x��sEEE\�������K��E0~~~���tuu�����=CQQ������h.=;;J��?�������q��������f7''���(**BBB����y��HJJ���3������\�~������t(�J����-[���'5����c��I�&a���\����K�.��?�������� &&������c��q�L(�sr�R��@ �@ �V���U���bu�@��-[r��{�z��<y�.]���������!�>}��5n�zzzjvJJJ�����/_"%%����B	=z�����������u;�}N�#�L�Z����hv�v��m��G���`��d|�|����|�26l��I�Rhhh��]�D��J�j1���o���LMM����#$$qqq077G�n�8B�P����75j�����l,^������3'��� �<�@�����YYYpuuE�z�0j�(���BGG������������s'N�8���7CWW7n��������s���BDD�%^�^=��			���w�T<|�������'add�VoU�����wc���pssC�z�������|�'��U�mm�Z�
���X�z5����gdd`��I����QTT���"����|�,,,`aaQ���'b��Q����
LMMye7n\��AU�<y;;;��U
���4i����j����r;v��|KKKxzz�|����j,[\\\������R)�_��w�gN&����""" �Hj<V������,W��R�;��to��`0>$L2��dgg�����7��}������

y/���888����P(��w�ACC���BCC1|�pc��}

���Y[[CGG�6m�S���K�.�����/_"##
4���B�P��I$L�:S�NEQQ"##���@5��5�@ ���^k�U�(����������C����i��w�������-[�x�b|��\��]�8�:���<���^���6�Y��g���'������r�
���1i�$L�4��ooo�-U���D�v�j��w���4i�]�������ggg���s?~<�c��044��={��G�jm�������;w��n%%%�������������FFF/M�P ;;���]����X,F��=y�������;���5���k�%�}P��_
����	8p�Zw��=Y5�]rr2O$xzz���?����o������~�>} ��~�z��}#G���x������4�R��3g��1	���!�8�7,]�B�eee*{��l���"((eee����D��7oT��9;;c��}��3.^���s�����T*������D�#F����7o�SPb�L"���%�v��-�={����/���055EBBW���?��c��o������#..NM��"�*�Ps���y���Z�_]=z4�4i���P���]�		���.�����Gtt4���{�[�N-^��;�}�v|��W5�~ccc\�~��m��eu>www$&&�z�N�<	�\ww�:��������y��7n &&���un����z�hf������v�����
�2tZZ���������P��������`��q�s��J%�?N��o�c������q���������HJJ���E"
����NNN\^������@�7
6��#Gp��]������={6���������$4i��Z�B||<\\\P�~}�=W�^�������u�V���@__����H$X�z5���|��7X�z5z�������g�����@ll,���jg����H777.����HNNF�.]�+ohh���7��177G�v���W_!33/^�����1c�,\��W�����1q�D|������h�������}�Z����=b�;w����;yy��%���������'''t��
VVV���Abb"

q��i�p���(���a��)��q#���@D�z�*���DEE��1����������7�L����'&&G����7����|�Y������W��,_�������A`` ����y�f����V�1����1o��y������C�BH$xzz������;w�B����lmm������?���������+V�����6m�-���W/�n�)))x��9��i�M�6�E��65n�"������V��
���<x���Lxxx 66��P(`ll��\.���o�\.���5<<<�4�L���������������g������}�b���pwwGqq1�����]�"--
)))���@tt4�B!���QXX---�������������������/�����\�ayy9:t��M��q�x��J��H$���R�Gfff��Re�i��0`������#4j��V����D"���8�`gg���T�d2DGG�K�.066�&%�d24i�:t�Y�f�>3������S����������g�������IF5abb��c��e����#SSS�7������5��@ @�n�0x�`hii!77J�>>>X�v-�s���r4h������E�����
�x
������R"�B����4h}}}�9B�w��EYY�M����h�M��P��@�TBWW~~~������9BCCQ\\�@KKS�L��3�����z�~�������`0�����2��`|b0�`0���� ��`0�'���`0�L2��`|b0�`0���� ��`0�'���`0�L2��`|b0�`0���� ��`0�'���`0��Af���x�UIEND�B`�
zipf.png (image/png) [binary image attachment data omitted]
�22����T��5�����5����q7��t��9m;MZ~��4������l��3� }�)<���A��DF��N�#�/����I�-�~�|g>y�<tB�N�;G��;�	�%�?�P�P����(nmw+A~AXLv���H��6l���{������	�=���M���i:5�����R���`z�����|�L�J���_��93���A|���_���2���u�p-�L8�����nNK�M<��������N$NI�qP�n����Y�f1y�dt:+�W��y_Lzy�_}��A����7�j�i��U`cY�I�^E^�{���*����1w��j�5EzA:�����```�9�_5���������^���0C����T��"d�n	9Hq�0����&`R$�.�8��5���F�uz6�Q`�������������t�?����|��;����v�V�����������Qo�c����r�����b�T��	[�3�i��Beb��Ml>��/v���}�m�V���t���+����$::��Q��;
������| �W���#�����������s���	��wpKm�t:���/��u+3g�D��q���i�r������+����uB|�xh���\��v��Bd���H_�[%_.���^E��`=�b�������bur(Im�A�ts"���{�Q���h���-*\��LCr+���k!-y3�%�4��o}�Q2�:qk�'�_�A :����8�����av��m���|b3K/a������x	�Mza�p�����,�dd�nKg����9����\����5��������pm�<�/:l@2�*���5K���cddz���L_3]+k��F�{����GV��Y?���pl��`M���j4�u>�B[:�]u���%�0����
��,�$�0���}���S�<.\E�|���o<���Ag��c�C�;���*&�#��ACs���j��e��?���a��<y2		g��1/��2�V��E��_eQ<�4�hl�~4� ��G+���g�b�N�n5.���$����a%��C�=����a�j�j�����%�� ���o����|��t���2"n�62a��l
�b�`��`1��Y��ZN�"O��|r�b+�a�[)tRPT���{X�������$�����b��3��u��nT�*�@	@E��2�r��7:��7���j>N�_��ZUz?p}�k'��8�G5��B�3���Y/z5������f���l:��w6����1��9�3�:r(�~F?~<�#��Rl�:r�s�IqW,��m�dd���h`n@d`$��5��a���������msU���)�&�/��`�
2�N�=�o|��:���B�.9�����i�im���0G�1���^sK�J����$r�9��7{��}d{6$��O�#�C���yj�9>��)s�>�/t���%�L�o0��cl?���2���*����]���Y�����0�0���D�#��8�6���Y>�M����n�6�F�=}a6�C?�EA��	*�$�@�Z���	��A@�p�pL��p�p���(�q8�0��A�6
lTi�@������6lS�L�+a��y���{�Z���i�J�ofA���i�i23�t�����{�l����/���F���=���Mx0<��i�1�&��Lp�t;yhb
�����"h1Y���P�)b��]Z_\�EzA:�o��p;p���=n|
�X�V\�&�JD[i�f���$�	qq���c1Y��T�)^]�*��y�<�y�ML�=GH���{���
�
������E �(� ��X��CH��`S0�E��:r�w�#�d
���O�o ��`�F3f����?F#�����,��k���5k���k
5\
SS�9��]�����B�ns
���w�������������Q�e�G`�5��D��@�\s��Y)� ���;-U�I~}�����U��E8=NMht|t>��|���5���F"��`h��sD��_cf��k��<9��s����v�"}9+��e��)tm��)]*]�W�r8S�La�����UXf�NVa_�����d��������a���g0a�1���3��Qt���	������b����G�����w�62����[���ML����W���?�����[���)�D���~���B���	�E��'��	��m�o F#��`�s�����v��i�O�lrs��!C��1Ja�o�'�0[����XL4 �?\�����g�d�d[���Rb�*���>�O���G�����z?���V���c��py��%;r���f�J�����u)��VXT���K��a"f�?�a�aLh?��r�����[��h
[�\�]����G�|�{����c�MY����`�1����qpc������`S���W�9����5�u�����^
�����t:��k�e%����0k��0��B�����Z�9r��p?�ar���!���1	��B��	����&.�6�d� 9�T������[������j���#//���C�/���M��t���7���_f��c�<��^aY�v;v{�)�
����H��f>0�i|�'�s���Zo�Fc� �|g>E�"�v+n�&�����&������Vd�C����o"�!�������n\-]$f$j����f�-�H|��qe�1�3��@�c��J����g�-��nA��=���.����o�}�k���7'������w���6���/��j
��B�F�X*�DXVwm66[���WT
%�!~~~\������B��wZZ���k]6o��+s�%22����k��E���xc�I@e�CZ�6#�[���HA���U����M^d$���Q�����S�I���0*2�^gdP�t�k�L��tR���lV�O�?�H��o��t��������W�������O3s)�"��C��W�����i�{H��k���t^���������R?:)�0��kWWS�����������o�h��=�����t�X�;x���������-Mc��r�"g��-�?��m%�J�
�|p,@
GO�������wi�c2 ���#��R5��`����
U���<��Y�7���g.�o�K����[�9���z���b���3c�V�Z��`���?'99�)S���\�Ov���H�}y�.|������~r�
�n�r��l��	�j�(�G��=��jmw��AY5NVVK�.���$��(2qnu%��(.�����jo�m�b��!2���z��P��;g��{��[{�Q�
����B�J��P��� S�&�.��d������Q�[�z����c��n1HUh�����F��6��������iO��l�����-����+	�����
�K���@!-��R����������@�u���<9��m~!_���[Fc�"�����9����jo7�Mt}�G	@E�b60�\�[���y[�Jgf�����1,Y����^�'((�Z�ok��#G�>y�WN��!Ep���|`9�"jpx�C>��C4��������I��#���.+�o��y�J�`� Y�L�~�}�
;�g[�����CBB�W��R�\{=~?��f-Z��#��};]����u��?>����������!2��p�!������h���V� gu���a���0G��6��n(�����K1
�����d�l���������l6������Fvv�f�x<��f6m��N��"���7�W�������g����!''�<=�<m���U���C	H��`�Y�:d*�������k�)42�_�n
||��q
� ?�S�}��X�n����b�x2�F"-�S��j�#�@F���������Z�g#?��X�bE�&(_���#�2*s����Z�������,��~�����B�����`=��G;RRR����,���Y�d	v������D��f#��Wa!h`6s* ���@�f3DGG���L�j�*/^���_�c;�8%�K��v;V��k[nn.V���t=/���&,-m������f��\���Y5J����R~����������9Hk���W`t:���`T�����1�������i���	@���������:�����GS�Yn�U�s��c�h����F���D�G���9���\��F�������J�C�zpAA�6T�k��WX��C���Z��B\�^�@�����f�T���d2a2���>n����m}��������x
),,�����ENN���ZyAA��\Z��f�___L&���8<�=�,���V����7�)-� �g�����,����*y�d��bdDew���,��x������OQB��}/X���d�;���9�qF�#E�������j��kAF��t���2#��@ue`PT%�!u=
844�[n�E[_����E��V"�O��?�'�9f��`	���?~�����!BhB�������C��x<��n�v;EEE,X�����l�������B�"���@�X>�����tB/�aX��k�y�_��*�'d�H��p�Z��$%����M��@��3r�d��\�C��s;2uJy���S-�9�s"�P��?��������"�g��g��nq?�>����~�E�+��`
2bUBK.���2B�}�?��<E���Y,K��xQT%�Td\
�y��PK��Y��t:M����uk
��o����'O2`�RRRHNN&11�;vCLL�����3��j��_D��)�/��{p%�R8i���\g����
d��a��;:sy
[]�����_{d@�H�	d�����W!#�KD��=9CNsd�����z�
��>]�?"v �cg8�CT�5(�6�
������s?;Y��y,�z_�<�!�k�K%��/�G�CJU��u���^p�UWq�Ur`';;���dv���?�@xx�&[�hq�tG������Dtq����H��>2�x�e���*.`=���i����#����v6Q>���N�"(E&���_[���@(��W���7�+����#����*	�jX��H������|�elZ��%n%� ���)~��	�K�	��"
��������?w���O2����8(X��Q���>H���h��|��5�D�U��� �7�x������K�t��������$''�r�J�V+-Z����!!��7.)|�;�}�G
�0�C?�x[���i����4��]�CB ����/M�|>��B�)�o�w�?�u.���������8���1����L��Y�z$���2x7�3X��x[#�"��o�����6��7��� ���7���RJ��<'�`]5��e�`=��G��+4�M���K��z&//���z��U������w����Y�@��Y2\�f���2\�W�B
7F>��#�8���:#�rSU�kJJ
EEED+X#T5
�|�������a��+��9<�i	|i���,6.�D����LeTF
��3g�@&I��J�*���t
g|������/ ���CZ���c1��l���(����/�JIn��������@��lEZ����b�j����Q� �����wbjm�D�	�9B��t�<~��MD��wo2�g��/?��	$�uW���?V���-'���kH���p�;���v:����3�O?�uq��Q�f3�c������DGG{��ViH+[�q����"��Js!�����?#�`�F��./.���;�i�J|��BZ3���>��#��tE0#}R�@Z#� }
u�0�I]��.@�	F������N�%���$�T�g����V�^]����Y��(����b�s�/iY?���-Z�@�L�&�������������l���[
�p'�1�6���{����j'4&���P��Nm?����d�K�(������"::�F���C�HMM=�K�6m�T��i	>���iu;���(��$rHvR�����d"�D����������%!svF	������
y�7�
i1�B5o��0�k���� 3R�~y��#H���Jr�����l&�l�*�R����QC�
E��cg�k�0�����_���/�I�&l�����`�s#X��b��v���e�����kmt����>���.��q��c����|�V+F���E���i��1QQ��\��
�H1�,r(�[�pk����Hk�m� #]q���>������9�k/�&��$���ek�-a!�
X��<f"�i�C��|�3��.�2�O��[����t����k(��q#�l��G�r�����,I+���������0���h�AN���o��F��V\��������57����X�-4������T����ki��q�������������A��*~�M�N���_�� �_D����=�G]
2�)�AZ��G�w#S;�52����d�Q�l��Q��p&�GQ��!�z���`�����
E������eK�x8���`-��*�2��_�����s�Nt�-huC+����#�<����ql�1���GV�,�����n�y���3����B��oYYYdee�@�
�Q�V���k�a���+H���jn�Od~���d���R���6�@~
�Rgr(�d��E�����9�_rXR��_�������0���aW����Cj;�R�.�G���%t��3�_Y��M�����mC���
a��8����w���?��G�;:j�v��_���3'���?��i����n����������p�������wA�����p88|�0iii��n\.�N�"55����z�#..����(��qr������|�z���2���	�,3�a��������#�p!�o"��[���q���:c8����G>�g�$oh p#2��/�'�WQ�%�!u}.��N]��r����r��TBcB�������u����=_�!�Q:��o������F���/�QO@�v������7�v~��W\��X�t	=z��A���2o����={V��V�6�����d2�t:0�q IDAT��iN�������6]�����ph������0�L�����x�x<��zBrss��q#���DDD��i�C��*�5��|���L�0�������?�"���#�(/�t��ad�p���^���{o,����308�A!���u/A��9����
9�_L��T\h'#gZ��~�#2E�
*>�q��B?����UE	�zH]�D���K�Q����S����m��!�o��o/�F@�n��&�?����-Fo��q{��(t:=�����Eo����q���r��8�1�=�p#?���p37s3 ��.��FFF���������O�r�p���biii����������h��9M�4!$$Dxe��+-ZD���i��=III<x�������W\q�X�k.�����?o������v���^@� R�!}�F!#\�/���Q���&��k[�����{�B.��<�s�a!rh�7��"��1���	��>^�M���y��LNN.�<**J�ji�?p�.���m������������H�6m0�����l������ Z�n�
�&77�}��ALL�y��R(��k��V[�	����>�N�k��Y38���1^e����wo<���[����u��z}����v���Kzz:���dgg����
����J��-������DD�?{��T���wF�4I�Z��2*�A��D�����~�|�{���
� CD��Y��R��]hK��hF����4i�li
���z�W�=99������}��<'�U�V�h�����M��:u��l�2���h��9����y��g�������i��m�I\=w�\���:"##/wW��/pkI	
&O����������������n�>�����;=������}y���"��KJR�G
K-2���K��z|F�����9���[G����`��%�s�=�R�P�TL�>�[o��[���<������2���7����{�����?�N�������(~���
�_O?�4�������L&�7o��%Kh��uM��z��d�T*���$%%�����U+4
���TA�y��d"  ���"""����]�vDFFb4�d6���P(���!&&�>}����OJJ
[�ne��E4n��V�Z��M�sm���a��.q�/������Z!s��U*���[�r�����K�Z-O s����_~�o����/�����#�sw�"��;"��x�
<AY2o8�__=��Z'=�kK�.�g���}�V�������y��x��7P(����u�]t������o�>������y��Gq��<��c�5�C����y��'���o=z46������[n����4
.����c��e<���"F��w�����Q(�������:�������� �K��c�8p�*����d�N'B6n��R�D�T��ys�v�JXXAAAu�������.\���n��{�8!!!t���n��a�ZIII!%%��k�C���i��5!!!>����l6c6�������l�b�`2��X,l��	���!n���S�VE�0aB�|A
�EJ%o�h�kL�W^a�K/A���������@�s���R�M�o��P�����	���-�H���SE�[����Z'=!��5:�M��o���r�����V�Cx��'y����={6/��3g�$,,�G}�k!|����6m?����?_~�%���g��1�����K/��];V�^��a��>}:}��e���������_?�o�~Y.���V��MQ��������
���-[Bp��eC�Z���z����_��U�*����t����;RRR���G9p����;���^1XS���TiUY�GJ{��GZ��3�,~~~�F:u���`@�����������={�D�PT9��xV�����������������o����#���s���AY]���5����{���0((���


��6*11����
O�:������������W�
>>�o����D���~���j��-QQQ$&&r����m�6�x��
uz���R�$11��\��=��S���/%�������^��VUX#S�U�ZM��-i��%~~~:t�]�v���z��V�E�V�V�4hP����a			U��q8��=���oT���b��E� ��M���` 66��PA�U�R�V3f�V�^��%K����/�<���<�[��as��}�m�T(���9�G��B$������z��\�Z�����z�j���<��s<����0##��s���������������aaa���_����t���),,<��Z�&$$�����.G��jSpe(�)�Q� "���\j�=��k�����{��/_NTT~~~���SPP����V�%<<���p"""���� �z�����������j�F���	#44���7s�u�U�x6>	\��t�y���[���s]�����yQ*i�����@���!�K������=�����xd����k���N��*����z����i�B0m�4{�1BCC;v,v���N�����}�:���u�F�w������B�����j+��8�N�v;Z����P������������_��S{O��w�t���}]>����oO���������9R4��Z������'��NG�-*<P:���(,,������l�������j�Jhh(�����@�z}�����`2�����l������Nff&������z�

"$$���xbbbG�PT�bb"x�KJJ�}~�8�N
���'66��s���gO*����o��&���'����0m'_z��V���z�/?�\8h�v��?u�<//��z�����QmW���z���_���T�-Z0~�xz�������������k�.������q���q�>;{�l�����}����p�
�|��gq���3e�������_�:����[���o���_���g��_|�Ez����J>���
�g�f��|��WL�>����������_0c��z==z�3Y�x1j����Xo;��g��@�}����4�i�?�3��3�rO�j�s�����aJ7q���i�;��������j�������S��m���i��v������?`bb"��/'44��]���E:t�����tZ�lI\\f���G��e�6o�����9x� G�!##���T6l���;��q#[�n%==����=J\\7�x#���'!!��DRR����n��;[��YYY��_��'N����-����
6p��q:t�p�����o���U�V�n�:����h������=z����6\W���/���MIq	{�a���(:X���&�H>�G�INMf�����]���8��]��B��.7���\�P�<���m%06�����c��9�����8|�0����7��D!�5�q�}��y�f������{��kIII�t���f��1o�<�
��+*�i��)#F�����'!!�:���_{�/))!$$�^x�'�|�F�1v�X�|�Mo���BBB���o;v�y��P(��)�k�h5EFR��/c|�ogM��H��$&n���v�9V�V'Aq���p��^�/Y��z������������`�/��W5 a�o���?���z������c� ���hjh������?
T/��>`��a�l��Z�T�������X,�z��L���/�����n�:���i���7�����n��p���?^0������[}Li&>��9���E��������'(��;wwL$�P�'�&~t<��S	h��_\��]�X|�b���
��=B����0>K��yC��zXkNl9A��
��Z�L+��O@�3j�Z��x������D�����{�������]�8q���NFF��m����:���[�9G�����c�*�Y�re��Y����l63`��������+�q���(��3,��
O$Z=5��b��	����������n�0��C;y���b�U�o�>��H���Cd�8�L������������f��A4n��������q��1~�x��������Kl&N�������1c�-���8�}e�~��1��9��V���[o�u��L�6�c������������a�����0 ��,X@HH�����Q��BV<�������W�q%&`��[�w��4���BF���O^�-n�����;����~�|�'�-��MolBA��D:����/���E��vd��u�u�Z����3�h4<��!���O��go��6#G���W^a��QL�2�R�/�@���5j��s����F���7��j�2i�$8p =�_|������'�$77��{�A���sg&M�D��]y����0a����^`����m���H���.q�09pZ���K��p9\8�rSr9��1Jl%�]n
O�Js�`���Z4F

�jH�q.����n�����/l/@��P�@�#���rN"s����Pk�(�
�A���1jP��P��h�������iF�R��SR�Z�J�"�@.��!�4��M�4���h24h����u-'��������*
���j6l��E����[��A���Y�lY��	k�`��1hU���1g��K��*S��;g���U
�5g����.��H��9��v���-�����p/���a�����,��A��l��rb�	:�������O���=z��O�{�u�w��,\��W_}�3f�t:������
7�N��U��8q��w�_�~�^���^"$$��k�����sg�j5C����>��<���c��U<���������9r$�~���?;vd��%<���|��'�FF���)�H�yq�[#�
S���F�[`���$���y|��#�K�.q�T*���^h��V�����H�i�
�����Z�F�E�U{oD�Sf�:5J��(�����������;Y��J� ��6
�c$(6c�QnGQ����av�$���^��4�[���,����t���BiBH��~�k������ %��.�=�������a�����4J��<�j%%�2vdP�_�-������&��fe��,
P(/|�F���g
n�%��%�B��$Nl9A�����9������I"�m����w�����kY<n1�w��1�^mE���H�����I���>,X@ff&��
;���������}<
|=�4=�����{�/(8Z�_��b��1$~�x��;����^A�Ne����b�����`��CdY^1c�����8�K����������H����3n��?��L����g����\.�^���4k����WSRRp�:���l�����J�:k�������$�v;���y�������l���Z'X�"���������D�2p�L������`���B��������BH<�����~����H����eX0D0D�����3�j%j�c��f�5C�C�U���C�A����C����'q���+�����,��f��n��d9�9Y{S�	S�	k�}����@�C�������G��M���x!�_\OQ�l��e��d����
mJ���{E���<���\���B���b����z,��zl�1v~�����Xs�7�Cp��
��
<�O�����a�|J.�t�,EiEX�,r;��%���?�H�@-���r	������Bth�
S��;��Dw�&��H"�GED��
 �8��[I�������w8���bi�Nsqp�A,;���J��Q�F�x����A��D����Y�����KE`` w�}7+V��������n#88���c���\������4�6�SR�Y>a9M�4!��dN�>r���V�Y���Nn]xk���#���>���b>��9vM�i�^h��F���1F���8�#A����@������J{���Vj��P����BO�@�����y��ox�!;V
@����^�����I��~�,������`�(����f����2m���=��>�9��(�Sy��_S5upC��4�k��2l��Kr��@-�"�hw�|{�7�L�'
9��{�������p��:q�����A�Q���h~]s���N���c����~��<3���VRf�,�^�nN%cG�3
�m�V�,Nf���qZ�8�NJ�
�E�e$�A�Hq=�0Fey�C���b�lA �4Yg��W&��S���vr��`�6yE�\������Q�]�[����9�Laj!�'
):YD��"
S�znJ.�C>�5�,.,��:��|��t���
�Z������Zj�Ev�K����3���8er(��p�u����X��#�������.����
m��X�A�*l���AqA���"
�������3����'��W_1|�p�5���Qb+������u��7���t�#��V G)�f&�M������$�#]�N�H����q���t���c��*t�:b{�rj�)vkHq^1J��^�]�Z-�9;�.����bIl,a[��`�2R5b��oc�R�Z-���f�BA��/"����OaXv�Mom'�X;�	���ZQ�\��`��6���~�*��V���U��\���UE�V�0���4��F�����^��$����\��th/�O�Z�&�m����C��T��gc7��f[��`7��z7C�E���u�	osfR����.�� �H>�{2�����E����
��"�����R�>��+"�g���!��E'��D�	�]p�k�C���� c	lHH���iB`�@c+�����0>�V�..��e��s�dZB���X��2:����62��O�GX�0��g��=V���]p�C��
�g���(�J��V�s���X��Z�f[�;�'�sJ�+]�k���<��l���Bc�C�� -���n�%�������5��_��K�'���1���~�t&�O�y���.���RiT!h�]����'06��\��v`�=K��:���ZiI`l =���w7}����qj�)M@���3����%���3��o�?k?�)�^�Aj�|������Z��r%��Y,II��y3���Gv�����qZ�Y3}9���6��)5+&�E��nZ�����4�������t��O,o9�qq\���d���v>u�FG�B���<9N�lB��[�`q����Uf1L��L�U�\���q�W���%�eh�HA��E��lV>����vJ_H��,OA�T�P������^K��DP�����AuQi��a����o1����r��}�t!:FX��!(�-&�x6?��3��E'�(q�H��A�WI���Jw�M������&��4������@-*�
m����y^����������hsS9�^��P*�������px����x:����vt�hF�IZa�w.����2a������f���@���>)�g�0�(��y/��d��|��v�`�VE�%%a���mV+=�
#k�v���`���6��S�1a�f39�]��]�i\(�n��o�0�w{F}���E���i����\��J���-�{wx�et[����S�t5������!��������g���+x:aaa��UN@T����Z���J�"�S4������h�"�������2}���/U'���(
��z�7f|���]��C�1j�(������?����S�y�f�W��r�]�Mv2��D�d@�p������]no�X���7+�������p�.�T�k�8��B�B��,�7�F�Z�F�����F��C����b�I����F��O�����(]����:5����8�J?%a����(
BZ����JR/� �*��'X������������oXID�LN��������R��G���HE����_Y��G������Ih�^~��[�������=e�11X7��[n���q�7i�NA�&��������TS�Q"7��5%P���|����m�^�!���(��Z�������>*��A��g�f�����Ci������~Jt!�3�����Jy����D��
��<��0��isz#��o���.�X������xq^�����R�H�0�T^,����������?�O����2�F��\.�f�">>���� IDAT�.]���~��Y�������ys�f3�-��v�t:����+�;m6��1c#F� ..���\/^�J�������#�Nk�zXO��9d�W^I�#����k�c��#t��'~a��Ud�����`L�����0q"�|@VR�M�D��X�K�������r�c���`�pt3g����H]��5�������
|�����Fk]����#.m���Hy�(_2b���6��?->>���H,X@ZZ�O��%�G���M5���=w7�>t�6�K��'Y�+S��O�IKLc��}g|��+�um�����Nx�p��gU��8���E}����Q&���[,:v�H���+���am���k��dff�l�2���>��+V�@��{'r��iW_}5m��!--�+V0~�og���Q/��6o�05��������l�0s&!W]E���p��Ph@��
gP)'N9b1�>K����6m8u�$�7m�q������������STD��y�Ib
#"(��$�33�sy(�X����`_��`��i��{��E4��u5�)��Z-�\sM��bV�J�!�Pi�fJ71=a:��~�G����M��l6�����Y8�NB[�zS*E�� �}$a��j<(��������M������m6����4�j5QQ�����G�2�~TT���84
�v�"**�B���n*K�#��T��;��R���G-^�]��`�II�x�	&[�}����&/����?����]�f�u�Z���:Eii���D^q��M��m?��r�f|�1����6y2�v����'�a����w��D�W�0�6�{�-pC�m'�	p��@{�f@8�o�#@'�&��
kj.����onBc�;��6m">>�F�|�������>�wl
�LRe������?F�����F#����m�b0�����j.
^��l@��K[��� >ko)�S���Vr�eJ)� ,/k�A��a���U��XRR��5k���[��o9��������'N`�ZIMM��tr������1��l6�V+����bv�����cY�t���Z��Z������b�`��E���-��'�	��+���78-.F��mP]~���X���#���~������H�r&~�k�}������o����6��D���PXX��D��3��z���?�9������F"�������g�1@#`0��U�?����3�t�������i��E����w���c����N�|,�X,�f3&���dffr��Ao��bA��c4Q�T�t:����?�0HJJ�h4z�Z����;*�a5��w>t!:bb�I����]���M�%�@.�������9���_�������x�OF������W�?�tI��6���u����[���3g2o�<~��ggnd��
d��E�^�x��G���{����		����E�1d���
���~��'Pg�/��zX�UQ�����'*>����	�_z�B�BA�#h;b��o�������j5���w�U3�>QQ#����@
��i	��������:�+�?�v@�L�Qiy?��P�K�
�:x�l���\s�5�V�U�B�
���>g=�P4�L�X���LAA���l��58�N����h4t�����0"""��0���C����`����e�

��k�k1��B����ewqr�Iv��M��k$�X��w�^���/�Z��2���Ua-�������"���HJJ����S�n���k�����W�D�P�v��5k�V����~#&&�C���r�X�z5YYYdee1t�P�-b���5�e��H�����(� 5\��D����r�����!�����!�}�.��\���r��#Ee)�O�5K�6��@��B���F^y��y��n7���dgg������'��cYYY2mHHDFFz����L=r����>UUT�*TOV?��6H[!�xp�`��;��9�+����?��V>��	���C��Y��K*L��R��2���-U���o������!���+�0a�?�<}��a���P*�,_��W_}��{���O?�D��=��t�3���������
6����#::�c��q��1j��D����#_W�����H�6����
�����,���	Lw���������G����X�J%!!!����1l�����";;���|RSS����d2@DD^aX[�+V]p����xfu�9�C��<��z��O�M�E���?�?��ts���B���������;�g��iQT��Q��(�
�J?n�������1m�4&M����~KwMw�|�M�l����$;;��={����c����+��l5u�����zjkN�i6d���J=E�+�����^`>R����p��o2�#���I��|F!}�.��b`p=e~���sN�	���t:����N����\���X�f
n������k29�B���			^�C�^�]�,�����������{�i��}�v&�y�����f�Y�D��L�]h#yq2�-(��PR,�����pZ������� �q��s�I�	�`����	���G�_���z���M�4���o������N�����(��p|�*��X,��+������^�Bu�@z�E���#-x3��7W ���z I��02"��3w_���9|�������(���:~~~DGGU����|���������f�f�a��)))�l6���Mqq�w�Z��V�+D�N���D����hP(8v���^�G�P�V����X�$���j�`�1���>������_�=���V�.�k�w���_�e���h�l����d��9��z��z�RS���:Hm�>Y��9����NJ��t6��uk-r����@O x������N�����eH_�U��5>���>�-���j]���Ga�����U��|��������B���l6�q�\Ufv���(,,,d�����n\.N������+q�\h4���IHH�CWy�3��M^@^�>s)QiU4`���;�]�}�]�/:������F�r�J��V~]�^�Ajk���>q��:4�z�E'%����OH|Pn�/�x����'����s�>
�f�'D�Vt:�E�+j�Z�Z-��eSAv�\No2��>}:�&M��r���Gvv6YYY��������E,�y���

(((�I[3f����_g��um+�(��I�N����/�f���Dx=��^�Aj{�5'���Vy����g�D�fg���V�'09^���p�*�������_��|���(���T*��k��,����p���^aX>`�##""j���0
��O���w���o�~�z�4iB������~��	>|8k�������S/��9���_)-�����C�B�]���h�w�����M�:=�p%�X`�~�u���.����L�f�|�����:���P����	{�@�r!�-Z�^��������|��hd��	�e���%%%������Cjj*�������Z���r����V�Q�T��Gz��+���>J��W���EEE�x���`���4~TN��P(�8q"O>�d��S������������s<�!=�W���
��h��[�m����dp:��`��e�-Z��O��f
L���U���$������c����1&������C������������8���j��B)z�3��t������`6��K~���
���������3'��a5�[8p���s��E~>���Y�|��Z��y������h���i��%-Z���vs��Iv��M�V��+�����c�X�A*1h00��O��s�>��S����O3y�dL�&�g�=��{7C�a�������^�A�Z0B`G����c���_~YV'/O
���!%����������������

�p�28xP���kR��&��BH���-RZ��}����j�Z�����RcQ�O<!�������H~��.E_R4m
7�=z@�����p��p��s�\~��N�
:u���^�������l6���?#c]��QT�X�_g-������n�p���j+D-q��������v{��5��^ah�XHMM�����x��(
������T*�J%AAA����;gse�|#""���_0@F��1���v�������:u*���@+�\,��R����w�����{��8��/���������
����<N'=*��������O�n��J��j5���H�&��p�ZI�����`v�w��R���/_Z4iR&R�����uk�?8�Rg6���o�r�����X)��������J�/p�@Y@����Q�'N��MR�m�$���)��zJ����c��5����e����k��K��+��H�+�(��:��WJ��%�f���V/k��������r�T*���D����t����!���4�n�J��]+X�n�JQQf�����h$00���������Fch�Z������zX�kQ�����S�/?�<~~���\��F�*{o�6�<Y��(,���"�sr������Rz�G�?��
%,#C����)-x����R����%��6����b4J��UWU,w:��������_��O�z``�5�];���W+,�[9��H��tJK����e��j�VK�E�5�`�^��V�UV!B�����3g�wT�'��E�\���e�o�fy�W_-��1��s�|��������ee&��-�ab����K�X�Z�t�������!7�B[�~~�]
Q�L�@�U^�1�3Z!7n��}x,�n��T*�{U*��0��o"<�������E��N58kIm�������YC�j5{��9oo����l�
B����l��������"~~~�y�����STT�3�<�Z�f��eDGG�ss�^f�`��G�NP�h�X��A}t�
��g�5��IXN����DX�B�I���J\r}�D)��ji����5��_��;��T�2>������HQ����j�������r��`��"�=>mF#�/V��(D�u�<�7������8���~&���|�I���$7z�������r����LUp�����������k����I��@�(�_�Y��R4n7^�]k�����3*�]w����u��f�h�H
�={`�Yo�@y}�lp��������#�����?�u��d;={�����i��\�����|�y��O����gP���K�'��|
III�f�q��	RRR$77�=������Se�`=5N�+����7���d��G�e�!y�~��e��t8U� ��IH(�T\,qqRN�RV6�th����w
�&�?��r�X�{L.���T�'���8�����RL9��Y\,���Iq���/�`����"���~~eb�CM�$$�s�I�������'`�P�Tsv	�3h����p5��RR�wox�%�=|8����~����,���n����+��J,��r�6Yw�)�4�m��7l�(���;���m���l����{��0T	

�K�.����������[|���l6&O�|�{���"����Sy�w���s��r�PR�Y[s�RWHH��$�{K�jb�0��3��(�J�v%�Gld�C��?��� �������v�t)���������0��/����>���<8u����.�X����/������R���
����##�����;n�<��f�)��Bz���y�@||�vf&:$�5�w/'N@F�<
�t���7x��&)�@�a4J�b�����9"�t~�l��_Dxx8#F���~�����w3��`�^3��������_~a���<}���2r.rQ��%�W��reX
�|
�.�{��`�n��v�����*�j�PRV��]��������J�X,8/��u1�jSJ+r��5�t��&��� B;��9��Sq���%�s$���	����~p�f��=,�����G`Cj��x^^y#$

`�l9\�� ��,���O=Iz�T�f�i�]n�����^>xL�&����
���)��>]}��5k�E�6����
�l�x'J��[o�����R$��?2�������FZ��� 1G�f��vi���#iA\��v�?�����v����Xw�����R��o0S�2|�p������d`��_X�t%O'?
��J?����LD�3��Hq���u��D�p��G���������c��A�_:�t��
�B�W��p�r�#���T�_I��9��`.�����"��h���
4�M ��8�:M!�F�O��z�L�a�a���J�l=|?u���~b�@�|
�( '��\P.F�`�~^�NX�
��F�$-[�]Ws�V���"��
d!��?p��\
T�m@G��F�E�y.������S2�}�Ji�z�ii2D_u����$�fIQ+��j�L2���3���R��z�|`;V���+����8o����y�)i�4����!(Gff�(���e����p����|�-)�/Vk���{�J_��C�C���:/v�1�����L��T������.`���#�e�/|��K�$����B�.����vK`���)�9/��R[���� sN�e-�X�?��������I�>Z�������t��J���`���&�	g"iD#\���,���bx���7�b��IN��/]R��#h4�,}���^����J�+m�v��C0�)�#o6����}��c���4E�A3F#�J&��x�Tu�/��)p�����\�/���l��@����W��m��>mo�L�T;���
��!�*�|����t��@Q�9�|m[�?���:���`���>�a�S���$�{=^�~��<0A�K	��rJ��si���.m�8�*�j�,��r���gtWT�rD�<�c��h���~���#����R0]�a�d�9f�g��$��?_��]a6C�nE�����	
����H��,�q�����q��������{i=?^�^y�,?}$�y[���E��{2��cG�
p��wiZ��B!�Q��SS^q<��L~�#�SS�q�le�n.��w	�K��>�>����Y���VsM�����B���~����g���6����9�.�1f0�&���B!.\�����0������0����;lb�03�ALd"9��s9������Q��5^�Ad&3��������N��	���)L�/��-������c����r�{�O�OzY�?�����wq��&@l�z,�Np��^�z:)�c)�L���DLG>��[�/�]���|t�+]J-�4@�}��7�g�s��`}���Y���r�����)�������{ed�����u���r8'yN���y�s68l0�c��������(�x�,=������+�~����E^R�7B^���Q��h*
�a�?-��.���.����5R�+}m�|X8��pz�"�t�Y@F��iH�z>�#��U���������n��Lv��Xt��la���3�kp���=eKY��N�=�b�i��K�r��2���!)x���Q�}�H?GX�DZ�5)`
�V�U�dp��.�H>,}
{�����&#��W%%I+����	��b�R��I���3�����C���.����~�Q��[n���YC.	[����)�{����_�,F�W!]f��#�����w"-��[���?B]M���O����[���� �8��������� )�����k�i�jV����vm�9�=�����<��<��4�)��7a��M6�}��������-���Ti�)@Z\Q&|Z"o2��9������E��V	R��J�+J_�@(R(!����>�6yc��Lpn�Q��������d �]w�?�������r�j�"9�KR�L�<v�����9E��IC>�_(��&`�a�R8V�O��B�DEQx)�#��<��#_�B�u�HA���f���b�#|#K�������z���O�0�P0��~eX�����������u��R`i�r��b��.����bO�&��6���w�|����� IDAT���R�xf���e���!?�I�R\,S�������B4���������e��_I��KY�O+��2�E��Ik��W�����L�F)N��J�v��'������DR���!`�[Zy=b0&F
�[n�������p`3�>����0�~��a��|h��D��x�-�&C3L���TQO��o8�/��W�+���N<!�8��_�����!��-���������/&���S��B�����/���b�'��7��B!v��b���b�{�DJ�QV,�]B�%B�-B��5p@K�����"�����f!��re>��4!��>h��8~\�'�",L�[n��
B\;I�q��	b�P!,���z��_�s�k����XB��B���\`&�x]��b�"QqBa���:!���N/m�V!D����i!�z!��
�jc��+�
�$�!v���t
��B4h ���B;�����O?"*J������1Uh�f��7�����X���2N���7�h�B����x�}!23���}���q�t��X�^���~�UB���G�^x���q��U��F!���eB�rT�Y�W�k|�R!�/��.B�P���B�;�sGi�!&V�K����{���{��~j��o=�6E'��nvs����%,�U��,N^�KXB���0.O�P�(G���&4�9�9q��#A��
��u*�v7c�3[y ������(+%�c;�Y���HK�g	CZ�������������1���o��RX��t����g���-����X��V�mH��g����|�R�U����� -�U���f���CLi{�V�HK�sH�����4�,]j�ZF�8 5�5����T�E�
��+��w�4M�CFW�V���%�����������C���9���������~s�J�����=��`��<J�.�:UF9�������M.��/�q6,���&���=y]wA��������gJ(�\��x����K�U`&���x�t�i���t������O�K�95q�����{����v/�����I�$������/:�b��/�b��!��������5��}����[�n���Z\-��5b�,�'m�9!D[!N����d��&�]%���k#��c��*�]�������o��}�5e�[%D�!�Np�����g��}�5e�)�kY<�q�"{Uv�*g	!�
!�BD!�B�O���vZ�����bB�B�B�"������_�!::�(�%��O�Y�32�������V.���:����/�g��[i����I������l�$D�NB��#���e�iiB�����g��>*DD�������$���!��b�x�������!N�(�s>�b58����d�N>o��q�,�ux���Km�����'���/�b��f1�_�M9��:�Q���E���M��U^w�Yt
�uB�R�H�����KW����.v����C!��Q:�G0"�|ETT]��@xfM1�t�#�UvF�z.���9���U�r���Nd Q�o��eo���  �)�����+��bN��EY@VN�:�H��p�/�0�n��R�	:��e5���=�>�����O>�3��~{�Y@����2J���d����"��M@�O�e��g��U]��T6����w��QU��?��$!�	zU@� �* eAE)���+6�u���
�],(E%(5��$t0�{����d&!��3��y�k^dnN��D<<��|���W�4�b�Z�h�x�����Iz�JK�Z����~���6�����mVW�mSW-������w^���INVi�R���j~f�}�==k�n+�LB :�!�-l�Gx�Gy�7X�:Du��������%��/����=������o���Sy���d�A<��L`g9�\��G���a^f����n�������Y�Y�������m�6�����g1jb�i�aW���PT!\�gC���`T@���\CQ�_h��,T��0*Ls��yh����G1�|
&
T)kWu{�����rS��Taa!�NNj�������.7�p�)!~�������������/5�#��u0�7T��FT����-����?��1�qx�m��Z}6����MS������J�Ww���X��NV����F
�<U�hT��m��|�)/'��_��,w��4u��� Zc�Z����k��T���n�X�o���@m�{���]?������tQW�@RCQKl��l������:.-�$���P�������vP�*����5a+}��U�u��� ��L�����!�_pqmM>�����D��E��C)[�q�����	�}�J&g(���0�P�A�7\��@��-9Yx�
��x��QQP��������(�/��
���7��,���R��xz���j�>��u��^oo��2���]i��e��U(�����c^�P�� �p�V^C���������_��Q�DXy���N���DM��Z�����nNp��P8�/���{�Q�~�x�_-���x�9�#|
�?w�������
�9�Uo�
��QWNME���>��n�t�����n	�N�{���r���K�d0�@�I����K
@TPP��;j�Vp�������B]�"lT?N��7L�d��]Ebb"����|����J��������kI�����l����5*�E|%��jm����n���������a���gZ���3q|��;OcX����2�hxT���3e����m��6�! �
�wW�s�0z������3�6�$���~��5_��E��[�����$))����7o@Fr$g�g�h�����N�������
)PM�x���������cjzK�4����4P��n����8�������kW�����+W���Mw������'��I\\EEER�$&&���[;NX�������z|�./���_��=�'p�cG
���/Z�K��n��t��f�clpA-T����!��pj��{Qae��e��E�+������5�u����i���UL������B����K�F�����?KP���c��9��Rv{����5�����?>��o�y������X#@�g�����l�����~D��t@5)lr�4�{���a�@���w����v����6�X�P��<��G�cBa1������-j����,��R������_:\
��t����p�������_���l�����C]n�j���N�������e��EUo2�G<���z�X��/�T����;-E�1X�Ju����8����B>PVT���������
P��?Sk���~?
�P���*a���'�������m����Cc�4��~�#G��i�&)�t"�rss�S�K'����=�g�������q�6O�������v�R��w���������u77����X�H�xP^b�*���6������\=q@r�O_��_�e���9CN�J��&$$0{�l�KH�.??�/ZD�����coP0�P��PE�Ic�����ts]�0��*6�P��M!����>������gG����!��`��A�~f7�����'::�M�6�[~qaU����%K��Q��?yR-��ms����S0y�
x}��Z���(#7����`\Z{���j���BQ������c�1����c�"�<��,C5�����
�
���}����C���P��_G%�AbF�
Q�|L��=���?�������G���~��|�|!���O"WE�=�v���Y��j���vA*�����|��UK`��AN��]�9�_��K�������>�T�����*��"���8�'�x��������W��m�FK[,�|=���8�5)}��s��5�@	�+�G��>�����������<���SX�\t@5i/��W����|�����������e�4��������������*�I����}�� x���K���o����K?p@�<zT��)11�}����&uXtt4�W�cTT[dd$���M%$$���������x���z��n����;L\��^�;�������� �h���\d���������+Y�:�a��x,��`[�j>v��e-kC�����lO����1��Y<��)��8N�6����Zo�����������	.T�u�-Z@����0�^��V�/|zd5��a�.�T�O��x��nb��M�����������m�u�F����z5����C����;�\b�����M�7�
�e��
%..��r�?	����!�4��f��e��=������g����e��^��&��Ub����&�$2���.�F$�d0�B
���/�[=�x��44~�g�2����;�w�Y�
��������T��/�/s�c��9�����3�0`�N��y�>���I)��a���"C��n��|E��Z���$�y�\����Z�7�;�������]��x$Q+V0x�#�����-��������O�nC1�0��DU{�{K`�s�(�I�3��~n=�mik�������;5xN��-\����1|�p����������&��z�;�.�2���������*�������`sxu��Fn�,gY�J^��2��L�5�y�g� �1�a:��i_S��e��8��������MF�TO��r��W��f��Y��O���q�<�8p�����}��~v!3�g�X����f����
�@�S�/���0��2%���~��c���9����V=���>U�,U\�R�������	���vt����cc#�mn�F��>1�|�#9�q�q--1<��	'H �t������C����������V��5U�y��(..f������G������f�q�L�]����r~[
�:~+��
�8EKZr7���t�;7s3�hd��Q�ma��_�.����+Y���
���g�a������K�g��]�e��q;�#�|��Ar���`k��������\�wjs!g�������������B	�������O���P��s�<	�u^�O��w��>��&�T"������
7����"��&�|��#�
)$�LJ(!�4J(!�L�I&�����D��BcK}�T��_�&� ��vU+�H���d_N=Le*ws7Ct���8��~���P�m,l�1�3������~���W~%�RH1�K"�BhD����J/zD���D&�-����Lc���t��,f1�9M�T��+^7R:���%=ip�A�m��C����f4����7�>Lb�e"I$�3�!�,r��I��	'\p�?�1W\��< ��4�|��W���W�Qw���/����u����X���d�I!�$�H.}����G2���)�PB�Eq��%���/���iN�N:+XA0���B3�U{N`1�g��c�(G1b$�v����Gv��d.�����"s�m�-�TxP@qd�e�^+X�,��6�Xf�[T���B>���F�9�^����3<�5���6�^���&�4���O'�/��)L!����%,�<���Fn�v�I'�X:����$�\�2�}�#�P|�e>���D���RH&�B�����t@5u/�+�����ny��6~��a&a�>��Ua���� �����%R�S@y���^���<��Z2���Ol`���t��#��x�|�q���4�7�iV��+l9U�"�8�Y���g��e/q�������r�S��@g:s7��&4�w��O<��<0`�
7|�1��8sg���tkI��^�V���<���>��7��C�����g2��`&3���9�����A^�)W���x�!
@G{��`�?6b�
7���.ta��@�I�Y�%�@�0��ldCH"��x
B	echBRI�C>`8����hIK�p�7x�|�D�a�-)��%�rs�8w.��m+m�v�(�m�x��V�7��MJ���x���"�!�b�I �\|��{����q�O<�VX6�9�hv�[���n�G�"�g��nL�kA�	�>��9�iJS�p3���v$((�6�aj� �t~�7v��7x��L&�0��>z��v��j?��qg<��g�wW�m�������k[�'�"�H$�l�Y�R���|U{�����"�"�O9o�n�����3�|�����;����;��e-9���'.����DM����!�F�e����n[R: GK�}��	f�r"'�����<�@[�}�@�py��V�����8�ys�G9�6C�8�+��"��B��;�����S���\�
a�K���o{�(��?��>�)&�h~�7����,$�\���>��=����,����G"�\�"I$��U��$%�`�H1�lf3��\x�����j��;�	�I����OO�k�z�A'/yl`dP@^x�G�����j��[~�C0��L����[�����z�>�-��C>�u^��y�I"��4��yOr�wy�7y�j�������e��p�����~��}�mK
@Qc4����s������Y���O�
 ��w(]��jF�>��	'BK}�c��Xb����"�~�WIdc"?��HG�_Y�%W\�V�x����������� ��Dozs+��K.��s�cv��.�?�B
���4��&�;���O}	�^a0�iHC�p����K��U^%�,Z_�hE+Z��&4�����@�E�w�S��(�%-iU��A&1�V�"�0\p!�P��i��il���$r�#�O#
/�p��<e �$n�v������%,��-��p�%�.p�� ���Tr�t�����SO��C�f���	R��������s����1�����9-	������CD�����"o�0�%�
��/}�,N����Gx%��j���^�\����4fb�`:�������->�d����ehMkz��\��U)�"��;�+�� �,N�>Nr�=��3>�'H!�����@lMk������4)����6o���N�9�r�����S�y�s�$��~������
�<�Ug��?��*��"��E��&�>�#	$�LNr�$���������&4��U�����A-�oZ����oou�RIIj�z�,�'&���)w���Eu�����?��"���_���������S'n����m�����wSRRB�^�.���i�7o&**
WWW���K��}-��a�������4h�/YA����u��q��a2d7�`���-��$ZnnG���_���b4��{�E<<4���*��v�(����*�zIOO'=��������}?�./u����k_�(��5EGG���OX��A�ZU�0�G�_��>.�G��8<�	���3����)�<��<�0��J'
����vY�[j.�P�z��,��o��iiDI�S �x��f�g=����\���c�X�jV[�NO!��j����'|�IN�B
-hQaq����5'1�4
)$�����2� �4�����,��A2��pa<�,f�_�����?'{JJ�a��.OII����fOL���a��pRS�������$��Fx�y(�s|<t�K���wCI��������a�������jZ���b5�|���y���x��'-
��{������������/���)S��(�Wp��)l������SXX����?���s����>|8��C�!##������;���C���C���9�<��v;w�d����Y�������PMO�m�|����
������M��k��q�Mj__?�rO��8�4���s�6��W��^=����s�srr����>
-\h������QTTT������N����7��+��Gx�p��K_s���N�������,����R��WQ�2T�d��L�{ry  �<���G9�&6��]d��^�C�?�i�#�RB	�iLRHY��K`^�<��?\q%�x<��W��#FRI5/wTD�����<�����Ea�����?�p��������.v��Wr�7��s.M�+e��#�����t����v�93��n0�o;�U:'���=f����3>\��
9���/��;��3j�v�i�r���r~����90`@��={�����_��999����C������_%""��}���� IDAT��x��w���o3f[�ne��!�w�}����M�6��_�{�nz��JW�^���S���{h��-�|�	���111�n���_��s�2q�D���y���9y�$���A�&�s�����b���xz�f@,������oCII����c�5�"99�����L'22<����t���P��L?���s���8_o�E��4�tr�gqq��U nn�v��'xy��������**]\�kww�y//���7��o��6�)�)��t����o���>��
'�_�V��xB`���zm��Y�����
w� �\�%��[��@��^x�����~���/�����*^��=���+�)��\��V���9��/����F1���N�7��IO\q5�~�����
$�]! |p�	o�����{Z��c�w~�'�#������@����i9~������y������&)�����8��LBB�\����o��d��P���{�A�~��Yv�>}�s�nH6��f�|���diY�D��Ln	����999L�:�E�������s_~�%
6d��Q�c��v��7g��������k���S's�0q�D~�a���+������k8p����:u*���g��
L�6��k�r�w��?P{���l���#��F�GK�����
*�r���(R�n`����0��BL�����
���������BV�*�����H���WE��sPX���u��D�NN�8��@TqR~�!77U4���b**M������M����������!2��*f������<���>�_��C}o/���q{�sF�Y��Q��
��^`���0BC�~}{��j�<�^�\O
��^��k^��"�����X��j��<WW��&�]q5'�+cZT@G�HQ� ����<���e/?�3��*�9Ow���^��0�PF��]���7�o���g�;7�Q�����Vc����a��V80�o�-+�N��o��~���S�2�4��\�z��,��"K����;��g��<EI��[>��S4i��Gy�������p�
����esrr�S�N<x���sg�<<<h���E���'[��_�>���<x����9��w�m��}�����s��A���]�1cX��#�/���`���3��`���[�R>��Vv�TT���������G���T��������YP�B��,���Y���n���[)��@�v9]��������s�*jKJTa���*g@����4�N��+{m�\@�z]R���Z����gp�T�js�������*�����Wo$��wz�>L2�$�h��b
k���q�����.����N�����Lc��(���=o��d�3K���7���-��	����8},)�;�:��/aF�)�M:Lv�fh	�rR{�k�-�w��D{���7�YOD�C��F���m���?���8;_��KMM�p�PPP����6�+��Hjj��MPP�����PXXXiqu�+C���
h�
__.8@H�n:��
}��VpeW./u�������*:ov��"jz���+���������e���J��hQ�������e�&�*�fA��������V^�-���U���!�M��tw-aE���$O��VU�z�-}<��|��6<M��{�����9/3�E4�M[�~BO�%���W��� ����������H>u���#��;��ax��'�#hO|~L��v�gNSL�|<����������033����������U�
�h�Vaa���LI������^��4��*���1�����g�y�.322HOO����"y|��y

%$�l/<00���J{{?�
�{�'**���i<p ?GD0`��+�/�/�5��`2��!P�gk�?77������+�9���z��p!xy����p�$$��cY|�]II���x��`���K����je 4Z���89�#,,�|���T���!*��U����*����!#�@Z��-\k���]����$<=���EU����/��AH��U+W�3e�-[6���A�����o������0�!�o;o����B#Y��������uM������i��-H�x�~ub����3�v~
=����p�$�����ohA&��	������?!�C�tv�?~��A�$�49��M!|���cq�X_|���/���CCs
�|�����y|��mW����8����^j�*���qn��F|��+����#%%�������^��J�J�������r��\����+}&$������"!!���,�s�?nJW��#��V�m�Vi���������rM�h��}���se����())�eK<�����<S�����8|X]=����_N`���4k�MNl������r��]i��!aaj���~���O�y��{����e���J����u�����p�`������O~~���������z/?'�Z�OOO'..���oHC~�w���2�,���y:t��7-���k�m�K~+44�34�x�[��%�L ��4����hh���o
�{����?q�EZ�<JVV]��_�>�<��}�`4�;D���>��F������j�K�N7Z
����W|&$$h�?����ys�h4Z|m�6m�9s�h��iw�u�v�-�X|���P�-Z�i��
8P3f�E���4���M{���5M��:h�f��hs��I
������~?z�����h���&))*�^�2����������8q�������;�~Z���5���������F�\��O����M�5*�������5-4T�||4��]�Z�����5��{5���5���4m�zM��OMKI���<�����okU�Fi�w�9�y;v��v����yCB4M���v���������0�Vf��Z�f��o�}�u���_�<y�*��k�u4���Zxx���S�n>|M�,�M�0�n�����O��A����o��/������v������Y�t)c��e��������C6m�DZZ���3�y���,.��'�����cY�j���������~J�z�������a\�#�������VC�r����:c�M�[��(��WOk7N=k��c��d�L��Z��~9wN��R��6�y����6��������M���G,.V��4���S�������U'7W�{��V��J�F����%������u��j�����c���r�����U*��@�������'�|Uwwu�~�xh��l7oo���:.\�*���&=z��5���Z>�V��>��_d��m�5V��x�A��m����l�V��m9i�V[5P�>}�����^������m?�0���DDD��gO����F#�
������5���<�-[��I�X�lyyy������|f��ARR��-c��y,X�����w�N@@S�L!..����,Y��Y���w''��
ZQ5��W��WL*�o*j��3�������b��9����W�5��A�6j�G//�����>��S���WY��K�W�l��{�U��o�E���ekR�>������BP���>��)O��F���-*,Tm����zi{��!)IP��;xZ�V	��B�l�@����������b��G�����d�OOu���W�dp�z��M�����\Mh�
1U�<�rr�~�����&Xw��F���aC���Z6o���9s��};��7'�(f2�(�������x�b:�{����QF��{w���u�����?����8D��+���U+&L�`>VTT��e���Em�4d�f��m��/77�w�y���H���=z4S�N�u������o�w�^����0a�eK�\�p���M@@��s#G��R���~y�����%s/\���GM���-�u����5�����4P^�Zr�h�X=��T�5|�*���Uq���
�k�Z���J�We������0=]]5��V����U�z���kq���XP�
����]	��coo)�<oF��������V���������Y����h��Y��G�}�����`M���>���~�<C���m��������s�i�aUK���eKOF�lt����&��8o�v�DD@x�uw�R�f4V��hZ����J�.������cZ��woU���g�����S��ig5u��K�0c�]�
k�A�0�r��ku�L!��k��Xw?J�
��(�w�������J��]��U����DD@d���c�[�5n������5�5�c6��������R��y��4j
X��J:lWgg�({��V(1�l�_a!����6���nS��hTW���9�����U�W�������	�����6�?��UY��"5M�GuuU}*(p&+f�*�%�������;�H6l��W_e��M���/��W�����/<��C8p�ZE���n[R:����5�5��n����}�f���aCz���I��h{;���3�i���N��t���+t���-��������,,T��M����������tK��?���b��s�h��[�B7%%�xq�9M;��v�1�6=��+����qA�:L�*"]]UZS��9���73|�p���{rr|������I�&�f��k.�22��q�~��O���I��b��/s�"��u&s��������74���;Qk�Y����mk��T��y��3�����4�T@fd�2=]��c�����W��+�������..�g���
B__u����P�\]Uu,>��VxZ��#<��j�����w���i<��#�������<��������	X�f
�������VB���_���*�UT�
`uE�������o^{���%������|CB�;���"����TZ�����(��(��<?_]�4�C�������?��a�UL���)�]TT����zh��k??�9s��wo	�������O>��f������x�}��u��/�OS�),����P��S���q��@@�
a�v�(~Y�H
@!j���2�f�m���|�����\)��YyP��)��/��kt��u9Fc1F�3����v'��T�U���7����4Q^M��-,I���R
������Nr.^��A��OR�����������0{w�V����s�����^��5���ru-�o��@s����Ls�7�Z��\5?�h���h,��q6�Z������`0��]������IO	��F��X\]����wwZ�v'6o����R�Bqqq$%%���VLL�e{���*�W���~^�(xMe0���;y������H��oH�����?��m������K����?��)P]J�����I��K�.�^0V\Y��������k�m�����\����;�=z�`��%�t�M�1����j�����X�DnL���@t&!���x�w��cnb".�\lT!D������c��������p������!G�a����^Z�."W�C�n���v�����wW��Ni��
���#<<���������X�l�������H(�-o!�P���O�����
������F��C�DuH��


��e���asmG���w���>������O�����������w7j���H2L{�	���c����k[R:���6it�MKJH����}$�/I�KR����c����k[R:���6i3b����%�/I�KR����c�-��k[�������������kL�9%�B������@�A��p�y))���B�������f�����G{wE!�pXR:���D�{9I��KR����~����7{mK
@T��hmG�������u9�$��%)`}I
X?u}������%���I4��P��7��o��r~I��KR����~����7{mKR�:��>~^�������������Ba�J��zv!t�v�H�N��W@���"�B8�,RH���P\P`��!�G
@$I4uk�i�>��v�$��$�/I�G�^}��k[R:���DK;y�5�G������u�������#I4}I
X_��O]{mE�^��9���&����}�L����G���St7Cp0����~I���K�.{w����~���k+2��������N�:��6���G�����}CC�
$����i�t�����������0{w�Vk�����Pk����Vd��-�,��~}�Z�f4��+B!���P847//<8�}���"�B8)�$�������[t�1��VY���D�����%)`����/{mK
@$I4�����~��|�=������4��W�h����$�{�%c�mI��$�f�������s����r>I��K������#c��d��-�Xg��m���'~x�	f�����"�B\7[�,WE��|� 
sr�p����"�B�xR�Z���������W��wW�B�O
@$I���4m�������%��/I�KR����W_2����H�h���	�{�$v��j�G�h����$�{�%c�mI��$�ve]��&�$��%)`}I
X?2��K�^����$l[%�����)3~���V���!���H
X�����N������?�wW�B�K
@Q�t�>�?>�����]B!j$)�$�*����

��u��$��/I�KR����W_2����H�hW�u�\����D�����%)`����/{mK
@$I���4e
��m#�:

I��KR����~d������%)`�I
�~�M�J�.]������!��J$,D5u����B!Dm$�������X\�_{���+B!D�"��$Z�u���k�HM_������#c��d��-)�$�����S9�v-���U�I��KR����~d������%��$Z��4jDX���|�U��F�h����$�{�%c�mI
Xg�����~�oo���;��!��R��J��I���$����+B!D� ����]]�|�=�������BQ#H��$�v�����}�����m%��/I�KR����W_2����H�h�.�m[Z�����Wm+I4}I
X_����������\����dee�|�rv��Iqq1�:u��G%$$��&//��K��{�n�������<�>>>�6���,Y��������F��}�3g�6���,Y��?��ooo����qqq1����c���:t���@F���w�m�F$�v}�����+W�v��J�IM_]�t�`0�������#c��d�������������2k�,���X�j���:t��
�i��'66��3g����|@�����};�����wo����>}:���,[����{���_���I�n����c��)ddd����2e��}�]��?O��]i�����#!!��K�2�|�}���~?��9�rrx3,�����/B!DMa�0Z
����k���������.\�����h�/�4M����{
����gns��
����kM�4��?�����3g���l��M��~�M�4M{���4ooo---�������'Nh��i�>����qc-??��f����������z��������x@���+���BQ���p-<<\���qsG���#Gh����Xpp0��<�y��M�n��=z��t�����;�q�Fs���{[,8N�F�,�>s�1c�������on3v�X����&M"??��~���?���3fp`�J�*+��N�Rh4Y�|9�����77}�t��G}d������];��^�d	����7���XZ�ju���j������bcci��������i�������G/kc0		!66���bN�<yY���P������5I�]�&�����g���m$��/I�KR����W_2��V�
����?��?���t�M����O�
x����\�d��JKK�[�n4j��O>��;v��K@�D/����@��W�6���������h4^�<�&I���:}:V����%��/I�KR����W_2������^���3����x~�a��[���s�4���H��k��999���F��9|�07nd��a���������-�����������=�%K���ukF�a����?����{����^z��}���_?�6K�.%44��#G��K/1z�h�v�j���W^�[�n<���c��E����
6�e���_����Y�z5|�6l����|����c���o899d>O�A�X IDAT���������a�*�������m��9sHi�gw���<x�F�YL�I�w��^^^���U#�S�������aC���jDj���'O�����`���m�]]]	 99�F�G���?����-[��k�.����:u*z��20���7����eAAA8�v���C�=V===�9s&���?���[�7ok���~�����^�5���T��c���6)))W<���7�����1�������jz��������������Ohh�����/������Z���q777:u�T��r���WP�o���!|����}���-�����;����0���prr���m��/�U�S��p��7������~~~���q������������������6Q��H��M���(M�4m��1��w�m�������7�j��u��io���e����n���n�4M��{�9�~��ZQQ������Z�F��y��i��i3g����ooq���,�`0���w�q�6`��6g��������?�\�4M����6y�d�6QQQ�m�����O����m��}p�-���Ba�F������S�6m�}��������Y�t)}���ZAz��	���KTT��XJJ
��o7_��<y2���|��'�6_�5/^d��)L�2���X6m�dn��GQRR��	�m"##���������W�#G�4���a��7�y�������p��'���H��<B!���T�)))��������������>�
0@����ZE����
8P����F���y��Zpp���I-66��n�������v���k�F������g�}��\�g������1c�hC��\]]�%K��?o4����k�����	���p���M���/�m


���p-((H�4i���W/���K��eK���*���I~~����?Z��u�O�<������p���w�^;��n���?���8{w����c����n�n�J2��K�^�VW��HNN���V,GM���y3$77�6m�0~�x���,����o����2�[o���sm���;v�����Q�.sh�������w/����7����[�)..����"::���&L�`�Nae��	$++����|�I����I?}��z������[����cDEEq�]w��w�����	

�{����J�����-�
���W_2�*��	����S����/8~�8999��[7&N�H�z�t��#��,**"66�|K\\��
����N����|,##����
�����`�~����J�KXX����{�%c�R�
�5k�p������J�-���&55�3g���A"##i����uT�p��z5���aJ��0B!�=���R��g��������������~N�:E\\M�6���_���B�������={�<{��]B!l�J��s�x��g.����iS}�Q��?��A�zyq��w�g�D�BQ�U�l��M��*��g��m[�vJTN����n3f����hF# �Q�M�����{�%c�mU�|��wx�����u+����J�[���>������vRX��(�+�{w<���+]}]������/�X?2��K�^��Rd��������<���),,���777s��� �E��:I;��K�pn���^-I4�I
X_��������W�Q)�������tB///���T�c����C^j*��j���N�`��!���lU�V�����J��-/[��o�yyy�-�,�#�
����\���z���B!tU�9��������~n��-��_��j�������s`�*{wC!��]�W_x�:����y��'�����|QQ;w�d��a�vRX*((`��2��]�UZL���l}�E���s"#���Z)::������Z)22���;_6^����W_������s��7��+uB�W{��M�&M���$--�������a�x��l�Y�HMNNNt��>�l�FNN���SkI
X_��������mUzp��!2���4,X@���m�-QOOO�j�n�J]g�`�����9������$Tu�����{���aCz��i�n�UJ��')`���8913*��]BQ���������Ys��_��d����Q��~�-�%�B�F
@Q��=������[���&}�
-�c��B}TiQ��~�������qc�?����R+�^������#c��d/`���IM_999���pb�f{w�����$�{�%)`���IM_���g�������.]���Y3{w����~d�����mKR�:����4���ve��/�v�({wG!D`��\�
����`?/X E�B�ZE
@!*�n�X�\\8�n���"�BX��H�h�*�Dsrr"�������hF��{V;H
X_��������mI��$���K�hmG���������c�jI�KR����W_��-)�$��UQm�����p�\�I�KR����W_��-I�LR������s��9�x�]���B�ZJR�B�0��?��0��+B!D�H(D�<��P�Yc��!��"��$��*K�
z�E~y��
X
������#c��$l[R: I����$ZX��������6�U�!)`}I
X?2��KR��%��$����D�����E���W�����%)`����/I����u&)��i��at�0�n>h��!��E$,D
6���f��/RRP`��!��L
@!�Ch�4����W��wW�B�k&��$����D��D�������W�����%)`����/I���H�h��j-�[7Bo���+l���CR����~d�����mK
@$I4}]Km�������B���[e������#c��$l[�����k��M�@�>}������B')`!��E����W)���wW�B�*�P�j
��������wW�B�*��IM_��D���Z����,�zU{H
X_��������mI��$����I��mK�����d�N��=$�/I�G�^}I
���t@�D���&�,\��7�$?=]�^�������#c��$l[�������o�M��E����]B��$,�
_�����K^j���"�B\��BX����3��o�e��!�W$��$����D���s�[��\	:THR����~d�����mK
@$I4}U7���
w���7��b�jI�KR����W_��-)�$��e�$Z�����+��x�J��=$�/I�G�^}I
��$�3I�]�?��^^y�U{wE!�������Os`�J���!����B��'$�.���o��n��!��t@�D��5�h���'��O���/���6���$�{�%)`���IM_�L����i�4v��
?��-��Y���LR����~d�����m����N�h��v��y�x�}{|BBd�`T
�`0�������#c��$l[5:l4)**�����mJJJ�4
W�+�����899���r�6���������t�6���"�?=�4g���v�a��!����t
���#6ooom�����>���<>�{��k�d2�d�I$%�;IQ��R�K���rQmQ�z�-J�U
�V[Z���Jm���&!����M�u�����57�A�,������|�3g����s����l������O������������7�s��It��
���pwwG�~���^��o��o�Z
�F�!C� ##�l�?���-[������;v,
�L{=�z����[;�c��
���|���:�G�ERR����^x'O��������;N�<���8`�����dff����FHH���q��qH$8:�p��%<������������~deea��a�Y�'N`���3f.]�������x���s���8��wX����M�**���������&�1��u�����_��������1��H4�|""Z�~=�d2����������}��'DD�d�R��TVV&��z�*��[��o�A
6$��$�9~�8�#G��s�=G;v4�q��m�._����#�!�j�������vY���L���y���KK���-�����
�^�V��B���������b�0��#G�P���,�s��j+���������ul���������
��K�R����G��C��_������'�w���G�
s���aNpp0Z�ha6g��f��u��
8r��0g���f��}��9���h���N4�J������M�������H$p��^��]���.`�p�w[����Y�v-JKK1f��?@�-���_�>���k�T*���RRR��j���Ue�J����;RRRja�w��K�N4����L����9��w��s�����l����0s�L�]�AAA���2xxxT���hPZZZks���w��4�B�6m�X���F�App�h��=o��_�^���e�������vuVHH�fG$�{�%v�e�d�-Zd� �CD������ob��u�0a�����~�L���G�=g���(**��)S�v�Z4h��<������h�T*�;+V�@XX��r}����� <|�����N�:�������=z **����x�b�Z�
+W������r�J�\�~�!��}�\�2�=z�P���u�VH$4j�H����x��P���a��]��'�~����������q�q�q���r�z`����|�25j��'BL6y@"�k����;v�@xx���c��Ejj*�;f6�F�;v **
�����m����-["""_|�Z�l���H|��gfs���1m�4,\�����1c���/<��j�R���_��W^y��H$\�t	���P(���`�+���777����wrr��x����\��/9��h�xx��y��y\�������
��=����D���u��7�|}})))�����YC������/�����X����������7��zaNJJ
I$���o�����^�-Z�m�����8@DD�
���{����{7�s������q��M\��D3�L��SO���KE-[�]���.`�p�wWr�.�s��a���X�f
|}}���/|�����1c�R�0s�Lh�Z�t:��?NNN?~<���^Bii)��������2��;
4���C���*�\���+W�h4������G������_�u;v�6m��dBvv6�,Y���H�n��*��;��e�N4�D�g7l�W�F��������.`qq�x8�����-L��!���������pa�������������������f��O?���)�J�����q�*�]DGG�J�"�JE2��BCC��y��������j5I$�����y�F�#�!������P��e�


����{�3_M��lI��r���5��������F�u��E*--�vu�^qY:��*K���k�������R�D@@����`��;gN��mk�Y�]���HHH���3��iS�z�%%%HHH�F�Ahh(���'F�������h�����k���q�(x������1��Yj-`�+�.YM���`m�v�y3GDX;�cVb����d��||��W_��I���s�+c�1&.��N������F�������O[�u��z
M������,���������Tk�Qg=z�����N��+.k�^G���N4qY�m���H��\�����%p���X<�{��]������(�e��(���q#v��2J�����x-`q�Z����+.^���	Dd����Y��w�
������0�� na��E����_G�7�X;�cu��� ��3Fl����y��0�c��qh��M\���V�U+t�1?��b��������]����+.[����@;��h���N���f�TQ��5k�J��.`qq�x8����r�#��q'��l�M"�b���#K������Vp���X<�{�eK��p�������/�Dlt4&����*��1��H��1&{����q����
c��:�@����_}��u�p��)k��c��qh��M\���������~��'�`��s���X<�{�e������q'��l�-d�0�������Z;�G�]���.`�p��-����@;��h���N�A�}������{��#�.`qq�x8����so]�]�"�.`&�b���x5!~���������1��c�.`��}5}�I����M�v(�1�����0�M��>Bt�N���X;�cv���!�D��t���J��k\����
�.`qq�x8���^ro]���N4q�z'Zlt4K$X,� �S'���p������	vO�b�0������]����+.[��u
7��L�&�����$�i��V��*"''�����F���K���S���0n�>�||�����B�R�����WIII�J��v(u�^q�[����@�w3��FG#���������~�����`��c�=�f�=�>���/���(��i�pc��0�f��9{6��jl��/��O�(�1�X����N4q�['Z��)f7������5{66FD���V��z�,.��^q�[��w\�!�DW]�D{��[�����,{�p���X<�{�Ur�=�����G)���e��c!���M�~�kt�j��T����������W\u%���w3k������������h�����a�1����O����?��X;�c6�@�@�>}0���m�X\��gk��c����C��&�����q���_^}�m�Z�,.��^q���k���C��&���������=o���_m��X\�,�������q��N4q��N����c�������`4�q�d��>w�������W\u=���w3[U���M}���o�����1���1&2�&M0��!���s���[;�c� cL�IG�"���p`�k��c�B��C��&.G�Ds����?���C����7�K�X�,ZT���]���.`�p����^k��q'��������qq�u�T��$�kq���X<�{��������C��&.G�DSzx�����w�v�/Z��];4n�X�m3��^q9j���w3{c(+������M��Hna�1��T0�"����?-���?pN����#�u��1�*qXGp�����q��_ ��1��_��W��!1�X���d���h��N�J�O<��=�46FD���:w�X\�,�����kY\�!�Dw���D�>�����bC��:{����]���.`�p��^�����w���;�������A|;`Fn�����G��,.��^qq��,�Pd|
 �Kn:��F����?G����v8�1V�X�@>����QQ��o�<x0�ss��+��1��#�kc�?,/?�+W����1f���C��&.�D�{M�b�����?����C��]���.`�p��^���q'����f\��0��?�w�������=�����]����+.������N4qq'Z�9���k�J%�8��y�����]����+.����]�"�.`����y������{�����Cb�1��+�0���D"A�+�q�d|��K��c���������###����������������
�j�
���U�h�Z$&&B�P $$
������R$&&���
O<�d2Y�9EEEHLL������ �r�����:}:������;v�a����1�X5l��9y�$��k���`deeUy������?�u�������a���fs6n��z��!""]�vE�
�tp�X���������k��M�V�B�3g���0p�@����e���d�3��&.�D{<�^xC����g����~��8w�������W\�{-�&��K�b��A��4##�������QTT�����9&L@JJ
 11S�L��E�PPP���B�5
�G�F~~>���#�5k��[���|�G�9r$�z=����+�{�n���!77�5���c�vmw���;��N IDAT�_�g���_~���&!�������_�u���"���X<�{�����l��u�N�>��#GV�����a41o�<��r�d2��5NNN��i�������1cd2
.\���"���O�u���U�V?~<$	\\\�p�B���`��}���hDFF�������c�������_�e��Qw���;�jG�.]0��!���������������bduw��s��8�Z�M�k��App�}���AXX����1�R����#&&F�nv=_�z���U+�9w�m�


���bbb@D8}�t�9����J��v,M�P�M�6VymG��h��c5���N���������A&�j5�j��C��BBB�R��F���W\�{-�&������z��U���FFFF��)++Caaa�9r��vc��������8��0��V
a�1&����NWW�*����(��������>p�Z�Fyy9t:T����95q��e8;;�u �t:8;;���K8�����q'''a>������\����B�[��}9�.�I����G�P(�P(�i�*����y��y�6�����[(
��z���E�h��E���#�t����{��9������-[`2�0v�X���6mByy9^~�e�[����2d����?��c����U���m[DEE��Y�|9Z�h�g�}K�,��O>Y��� ""�������c��MX�~=6l����hDGG��/����k�~�z�T*���@���;w��\.G�F����;������#88�F�y����o����L�j��&�����������)����0-
7O���������}����	|BCm2~{ONN����J�M�S��O�8�����6O]������q��-��G���������a�]��F�a������;w�y�����I��c��U�w�����KDD��V���I�9s&u���^|�E��
��jZ�l5h���}�]�9����������1qQQ�X�����*]�t�6o�l�0��5����q���6m���(��p��!�)_~�%���[;�:�s��8�V�������_�.������3g���&�edd����x��'�9�Fqq�0'!!���fs~��7�Fa��C�PRR��}�"##��/�����]� �J��9��Dw������:���8�2{���������*{t�,�����kY6��V���c�T�n_�p!�l�h4t��Z�m��E�z��j�*H�R��?�/_��s��R�������t��	���>�����[o�`0���S��dHMME�V�0t�P��5���x�����������3����N���S�"%%S�NETT6n�����k3fn��)��	aS�J22��l�8tO~������#d�1���Z�6�pzz:U�5l�0a��k����$��H&�Q�~����+f�:{�,����$	)
:th��F�;F;v$�T*����������g���R��4u�T*//�����!f���,YB�o�2������0c��Y�#`�;��***�����2��d\�W�����	���s�Z-�J�C��g�9SEN��|�=���?�d	����'c��u�:h���K.�?��*o�������X�x��O,���x=Jq=h-`�\������s�P���OCCq���>�fx-`�p��^�����z����(�������o?p����6m����qb�
lz�I�NL�P�����^qq��,.�w���;����];4n��Fs�����������W\�{-����u|
 c���������1��_�c����c��p��+��0c�=>.�w���;����.����[8���Z���q�x8���s�eqh��M\��&��t���)S�2 	�|��[�������W�X"�b��k!b��]����+.������1���h��N4q�k�*����38:��)�v��2��j�]��[�O����h6`�c����.`�p��^��3�vH�P�M�6����h4�vuV`` |||j�	|BC1f�nL���xtX���8L7~HHH���*�����kY\2���f��w�RL�~���7���-[����0��Z;<��:.cu�����L����a��uH=~��w�t��a��c�j��C��&.�DWmtWgpt�o��Fn��W������]�M�~��{�?>��{H���w��s��8�Z�v�;����h���.�G����,Z��iih;~<���O[���e��-(�2�VLJ23����.`�p��^���q'���M\���d��h���z�����.���`���+�}����{d�,�����kY|;��h��h4�TD�����
��0��	%8�i�0>!!�)�Y�pHH��C��8���s�eq�cw���a�����-����{����������h� �sgk��c��@��G�P�}{��s�@�����4d�;���0d�=�����g+cv���C��&.�D�X]��%�sg4<X��
B���1=9��=�V�?�����U���^�p|�2d��>���_�����,�����kY\�!�Dw����]��C��1��L���[�vF�,Z���t|?l��@ry�nu�����3�r�H���]����+.������N4qq'��l��q�]\��/����7SS1f�.x��'��#__l<���(�u��qq�x8���s�eI�Q���D�0k�2V��>
M` \��j����\�8t�v������P���3��Q���H$�2x0��LA���E��1fo������?D}.E� c��d0 ��Q����������z�B��Kh7a��|R����7�0��,Ur0c��H�P�iT.��
(t:��>���Lx�]8��pvw��F�g�������1��@;��h��N4q�z�X�Z�
Sbc����x3-
���~��m�:|8t���9q�C5�T������W\�{-�@;��h��N4q�spm�9;�_#IZ�y#����HRt�f��y��w���A�{������#`;��h��N4q�k�*���a��z�Z�^�V�9{�Y#��9s������#�o_4��Ry��[Y\����^qq��,.��G).^�R\���5x5k���}w��F��#�r�H����3g�b�0%M"#���g�b�`(==��)�J(�JK�����+.���� c�Y�S�V�x�D&�X����g�"�fe���}���/�;}:<���bP,����'$R�B��`�Y�Fd|��������������B�8��P��B���R����~m�>��������[@@-E������N���#G��_?k�R'eee!55�;w�v(uRBB<<<���Z�p!�������<����Qp��w���
���|BC�?���B��b������U���L&��2++7._FcWWh5��^=@qz:*�Z��z4n�L2��s�"�..�
��>�3.���N4NB������8RRR`0��%>!!f�W��>m�zJ3UT�05�/\���D\��'W�B���p�h�����lY����%��k'77K�����[{������������������8<e
����������z��FG�vb���+{�`fZ�r96?��[�@���>=g���.�.�w���;���]��R��P��fcR��AA�
2[z��F���!��G�"��/�����<�4�|^���(/��S����p�k����	<�r%��L����so�(�q���{����-[�i��{�m���b]��h5j�,Z$�9�e4��Pzz���W#�_?��7��"��8a�=�o\�!�Dw�������a��%2��5�W�fh����8�L�Od$Jn�BAJ
���������s����+���� |�\4��
�BC��;�r%Z�c ������)���������xEq1�u�����F�de	��JKql�RL:~p��!t�4	'?�2�&M����9.c���wfg��D*���i6v�	n'&V�5�p�nE���prs��Qr��p������O��O"��OQ/47Fiv6�B��gh7a���*����-[�u�t�8x����RK�q�cu��^X������}�������0#6g7mB��3pvw�R�w�����q2�����@&�����QQ�
��~}��37���G������i�T4���z�����KJ�����q'���X\�,��G��m���������5�����vb"���.^���l8�����������W�����1p�������9�����S�6j����(-
1�}//��������G����#��]C��HTh�0UT@�7A�1.�w�������]���x�"�5kV��_���99=�s���0d�P�M��Bh��Q����;�� �����dr9{�F@X�#�)�����Gz��p��7��_���qq�����|=�y�ii8���0�����}��l����+W�����v�M��-C����
���A�i���D"��o-21nm0����#����999�v(uRjj*T*|��M�����$�M���7�.NOGFl,R�G��c�:{��@���! ,��z�~����d���	r��I���(HN6�n��F#��ZV�6m���+<^��CYY��[
ee����k�z�n�����R7��Pd�c����V1UT ��%���"�NQXt�&|��A��������i�*��,����q�0n��Z�.�L(�}�~~��]f�x%�cM*�W6��j�v/��!+!i��#�����w ����0 ,�����c�����P��-��Dt�Nx+=������ph�
����c�����NJHH���G����[;�:��o�E���.�iG����c���vs�e�����C4|8��#�W/��%����6���[110���P^X���\�B�\�~���- �z���__h0����b=�sX�q��,.�P~~>6l���H��=�������Hv���N�:q(�M�6�W�^vS�]�qxc�O?	�������C������������(NOG���(��������P��u��P������1??����z�XTzz����w�K{P���K�w#l�k�R#�{-�@;���QQQa�0����R���Z;�:����������j��j���������u�>���1��(��FIFJ��Pv�6������N�@Qj*2��a2@& �Ae2�C77H$@*���AQQpR�!������T
��.l��g���H$pvw�Ru:��r�U*���`��@�=Ke���&�8=�_~i� ����;�	V�{�z=������vH�T�����a�Y�������Y�������������[�UO����
���a�����?�^��11����xv�j�zu:��F��������
���a2Q��!��u���D]a!*�Z��<Y�"D "����6�D&�T&��Z����'4�C��4���Bf�?��9���;���n�:���?��
�YY���#<�}�v�X����,�Y�����r��^G�Q�Tvs
{���f-`�`R����b�R�
�^z�V_�8=��:a��(�u���(��@����y�$�}����Q����<�������x	_j���E/��z=�n�~��������w/�
�m��A^Q�^���{��7�w�^.�D�Uc��:�����D��,s�!"�dd�0%���(LKCaj*R���>9��2(=<��h S(Px�&6��*4��{�e�xyY}��B�~��?�<F���m�0/���c��@�cuNu��;�D",���{�*��dd���5�B[P ����0���PVSE��e��0p�����w��^^P����7T>>�cw�k��E��;��r�g!\�!�^�Ng�0����X;�:+//yyy���*))AII�����dnn���
�}��Lu������?�������X��2/�>]�|�N���<����</�w�,��Eyn.��^Ey^�RS�y�,�d2[����\�$sr������(�J�|�AF_R���4@'��rTa��?�����/���{��U��h����������RRR���k}��RRRrrr���$==W�^��+���"�?������IIIA��(�{���
�����$����l�nn�_M�@@���>���6?_��{��>��-��2�j��4w�PQV���,@bb"N���L&$���6mdR)���.W���X�:i^
Nd��Y�(��/�e�U��A\�t|e�L&�K�6�]���D9�w���g14�v��5�d�1�� �n}���)�����{>�))����s�������^���m���g"��_c�1f���6{h��{w�
wC���0v�X�y*q�@�c����`���*���Nrr2�����M���e99�<xx��YU\2�c�9��1�c��p�c�1�`�d�1�s0\2�c�9.c�1���1�cF�h�=��1��������C�RA�RY;�a4q��U$&&B.�W�V��h��k�p��Uh4�j�@_QQ���$������
����
�����d�d�*����			��������,%URR�3g���������qDF����HNN���g���������`0�������rss��U55Y�����q��4M����r���$%%���
J�������<����wM���"���C�����������x�FxxxT����o���3�J�����"f7233�[�n$�H��������I��[;4�K���$�������������\�GM�4!�\N������D|���v:D���''''�h4�R���/�������K�Z�f���_|A*�����H�TR�����?�0���{��B� R(��iS:s��%��I111��qcrrr"///rss��[�
��F�6mI�R���"�LF�[��7nst:�7����I$���eeeYa�l�����k�����A���'www������sj�{�]�F�Z�"�LF^^^$�Ji��d2���[Vc2�h����V��M�6���={6�d2���$�\N-Z����$����
�<y2I$���&�TJaaat��MaNYY�1���Iyyy��c]��6l�n��������`qww����[92�����Q�F��_?���%"�?����������#""�^O���4t�P*))!"��[�:p�R�z�h�����j�h4���~J2����=k���1���
�{���8�J�������h$�NG�'O&???*..&"�}���D"��[���d���:t(5o��***��KVW\\L4r�H*))���
�9s&���	?����+rvv���Q^^���������|����h(..�������U�V4r�H���-���������

�m��4h� aNMro����W�^����������6m�d���.�NGO?�4���Pddd����[I&���={��2�����:v�(��|�	�T*:y�$eggSXX=����v���K>>>t��""JMM���`z�����:�@;���I2����_o6���/S�f���}�q�
:������(!�����P||����;��Q����?��I�R�u�����d��
��i�D������C��h�"�_��Y����S``������4�J���7���#�s��f����%��~���
���/I�TR~~�0V^^n�>���
#8Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#7)
1 attachment(s)
Re: Built-in connection pooler

New version of the patch (rebased + bug fixes) is attached to this mail.
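
For anyone who wants to try it, the pooler GUCs documented in the attached
patch can be combined in postgresql.conf roughly as follows (just a sketch;
the values are illustrative, not recommendations):

connection_proxies = 2            # default 0 keeps connection pooling disabled
session_pool_size = 10            # non-tainted backends per database/user per proxy
proxy_port = 6543                 # clients connecting to this port are pooled
session_schedule = 'round-robin'  # or 'random', 'load-balancing'
max_sessions = 1000               # client sessions handled by one proxy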

On 20.03.2019 18:32, Konstantin Knizhnik wrote:

Attached please find the results of benchmarking different connection
poolers.

Hardware configuration:
   Intel(R) Xeon(R) CPU           X5675  @ 3.07GHz
   24 cores (12 physical)
   50 GB RAM

Tests:
     pgbench read-write (scale 1): performance is mostly limited by
disk throughput
     pgbench select-only (scale 1): performance is mostly limited by
how efficiently all workers can utilize the CPU
     pgbench with a YCSB-like workload (Zipf key distribution):
performance is mostly limited by lock contention
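
For reference, these workloads roughly correspond to pgbench invocations like
the following (only a sketch: the database name, the -c/-j/-T values and the
Zipf script are assumptions, not taken from this mail; -p 6543 targets the
proxy port, while the direct runs would use the regular port):

pgbench -i -s 1 bench                                      # initialize at scale 1
pgbench -n -c 100 -j 16 -T 60 -p 6543 bench                # read-write (tpc-b like)
pgbench -n -S -c 100 -j 16 -T 60 -p 6543 bench             # select-only
pgbench -n -f zipf.sql -c 100 -j 16 -T 60 -p 6543 bench    # YCSB-like, Zipf keys

where zipf.sql contains something like:

\set aid random_zipfian(1, 100000 * :scale, 1.5)
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;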

Participants:
    1. pgbouncer (16 and 32 pool size, transaction-level pooling)
    2. Postgres Pro EE connection pooler: redirection of client
connections to pool workers and maintenance of session contexts.
        16 and 32 connection pool size (number of worker backends).
    3. Built-in proxy connection pooler: the implementation proposed in
this thread.
        16/1 and 16/2 specify the number of worker backends per proxy
and the number of proxies; the total number of backends is the product
of these values.
    4. Yandex Odyssey (the 32/2 and 64/4 configurations specify the
number of backends and of Odyssey threads).
    5. Vanilla Postgres (marked on the diagrams as "12devel-master/2fadf24
POOL=none")
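
(For example, the built-in proxy's 16/2 configuration means 16 worker backends
per proxy times 2 proxies, i.e. 32 worker backends in total.)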

In all cases except 2), the master branch of Postgres is used.
The client (pgbench), the pooler and postgres are launched on the same host.
Communication goes through the loopback interface (host=localhost).
We have tried to find the optimal parameters for each pooler.
The three graphs attached to the mail illustrate the three test
cases.

A few comments about these results:
- The PgPro EE pooler shows the best results in all cases except the
tpc-b-like workload. In that case the proxy approach is more efficient
because it allows a more flexible job schedule between workers
  (in EE, sessions are scattered between worker backends at connect
time, while the proxy chooses the least loaded backend for each transaction).
- pgbouncer is not able to scale well because of its single-threaded
architecture. Certainly it is possible to spawn several instances of
pgbouncer and scatter
  the workload between them, but we have not done that.
- Vanilla Postgres demonstrates a significant degradation of performance
with a large number of active connections on all workloads except read-only.
- Despite the fact that Odyssey is a new player (or maybe because of
it), the Yandex pooler does not demonstrate good results. It is the only
pooler that also causes performance degradation as the number of
connections increases. This may be caused by memory leaks, because its
memory footprint also grows noticeably during the test.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-3.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d383de2..bee9725 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections
+          between the proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          session_pool_size*connection_proxies*databases*roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is used. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..07f4202
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with many
+    CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms working with these data structures.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey, etc. But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting from version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients with backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to postgres through the standard port (<varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file), for example as sketched below.
+  </para>
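+
+  <para>
+    A sketch of a matching <literal>pg_hba.conf</literal> entry is shown here; the authentication
+    method is deliberately left as a placeholder and must be chosen by the administrator:
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS       METHOD
+host    all       all   127.0.0.1/32  <replaceable>auth-method</replaceable>
+</programlisting>
+  </para>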
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects each of them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to spawn.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a too large value can cause performance degradation because of large snapshots and lock contention.
+  </para>
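+
+  <para>
+    For example (the numbers are purely illustrative): with <varname>connection_proxies</varname>=2,
+    <varname>session_pool_size</varname>=10, two databases and one role, up to 2*10*2*1 = 40
+    non-dedicated backends may be launched in pooling mode.
+  </para>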
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be made large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of pooled client connections is limited by <varname>connection_proxies</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432, so that by default all connections to the databases will be pooled.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed by the connection pooler itself to launch worker backends.
+  </para>
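+
+  <para>
+    As a minimal sketch, a <filename>postgresql.conf</filename> fragment enabling the pooler could
+    look like this (the values are only an illustration, not a recommendation):
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+max_sessions = 1000
+proxy_port = 6543
+</programlisting>
+  </para>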
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Since pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference with pgbouncer,
+    which may cause incorrect behavior of the client application when any pooling policy other than session-level pooling is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. So developers of client applications still have a choice
+    either to avoid session-specific operations or not to use pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially on latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    The pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. Such a backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
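+
+  <para>
+    For example (the value is purely illustrative):
+<programlisting>
+idle_in_transaction_session_timeout = '5min'
+</programlisting>
+  </para>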
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a03ea14..8179918 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..32d0c77 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index fc231ca..f77f299 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 515c290..cf851bd 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -561,6 +561,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index c39617a..522ff94 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..dd5caa0
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..53eece6 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..bdba0f6
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index fe59963..e1d4b87 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -559,6 +578,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -571,6 +632,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1067,6 +1131,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1090,32 +1159,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1184,29 +1257,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1216,6 +1292,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1373,6 +1463,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1610,6 +1702,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate workload of proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose connection pool for this session.
+ * Right now sessions can not be moved between pools (in principle it is not so difficult to implement it),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1700,8 +1843,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1895,8 +2048,6 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1962,6 +2113,18 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, SSLdone);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool SSLdone)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2036,7 +2199,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2703,6 +2866,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2780,6 +2945,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4009,6 +4177,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4018,8 +4187,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4123,6 +4292,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4819,6 +4990,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4959,6 +5131,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(false, 0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5027,7 +5212,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5485,6 +5669,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for autovac workers, but we'd
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  */
@@ -6062,6 +6314,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6290,6 +6546,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySock, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..d7fcc7f
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1024 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash table of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of backend and client socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be referenced by other pending epoll events.
+ * So link all such channels into a list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: write(chan->backend_socket, buf, size);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or when the socket send buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: read(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need libpq library to be able to establish connections to pool workers.
+		* This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; an error report was already logged */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; an error report was already logged */
+		close(port->sock);
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+					proxy_add_client(proxy, port);
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass the socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning functions returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 5965d36..85affef 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -153,6 +154,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -261,6 +263,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 59fa917..da651d0 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +763,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,7 +797,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -804,9 +834,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +874,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,19 +884,37 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -895,9 +945,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -910,7 +976,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -927,8 +993,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1330,7 +1396,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f9ce3d8..8955f0a 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4215,6 +4215,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index a5950c1..b4b531f 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index cdb6a61..41e59d8 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -455,6 +455,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{ "sysv", SHMEM_TYPE_SYSV, false},
@@ -1245,6 +1253,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2088,6 +2106,42 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2135,6 +2189,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4458,6 +4522,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8033,6 +8107,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index 43c58c3..a964d75 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index c4b012c..1823aa1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10570,4 +10570,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 755819c..3156d08 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,8 +54,8 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern int StreamServerPort(int family, char *hostName,
-				 unsigned short portNumber, char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen);
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index b677c7e..710e0a6 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 7e2004b..dac878e 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..e101df1 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..05906e9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index fc99581..6e9696b 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 1cee7db..291d4ec 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index a0970b2..f3c8efe 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 3aaa8a9..c172e10 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 7abbd01..7566f51 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -15,6 +15,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#9Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#8)
Re: Built-in connection pooler

On Thu, Mar 21, 2019 at 4:33 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

New version of the patch (rebased + bug fixes) is attached to this mail.

Hi again Konstantin,

Interesting work. No longer applies -- please rebase.

--
Thomas Munro
https://enterprisedb.com

#10Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#9)
1 attachment(s)
Re: Built-in connection pooler

On 01.07.2019 12:57, Thomas Munro wrote:

On Thu, Mar 21, 2019 at 4:33 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

New version of the patch (rebased + bug fixes) is attached to this mail.

Hi again Konstantin,

Interesting work. No longer applies -- please rebase.

Rebased version of the patch is attached.
Also this version of the built-in proxy can be found in the conn_proxy
branch of https://github.com/postgrespro/postgresql.builtin_pool.git

Attachments:

builtin_connection_proxy-4.patchtext/x-patch; name=builtin_connection_proxy-4.patchDownload
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..9398e561e8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends can serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          Postmaster spawns a separate worker process for each proxy and scatters connections between proxies
+          using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          session_pool_size*connection_proxies*databases*roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when connection
+          pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..07f4202f75
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with many
+    CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, is proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey, and so on. But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, in which case multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save and restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
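+  <para>
+    As an illustrative sketch only (object names below are placeholders), any of the following
+    statements issued through a pooled connection would taint the serving backend:
+<programlisting>
+SET work_mem = '64MB';                      -- changes a session GUC
+CREATE TEMPORARY TABLE tmp_report(id int);  -- creates a temporary table
+PREPARE q AS SELECT 1;                      -- prepares a statement
+SELECT pg_advisory_lock(42);                -- takes an advisory lock
+</programlisting>
+  </para>
+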
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
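+
+  <para>
+    For example, with the default ports a client connects through the pooler with
+    <literal>psql -p 6543 mydb</literal> (the database name is just an example), while the proxy
+    opens its worker backends through the standard port. A <literal>pg_hba.conf</literal> entry
+    similar to the following (an illustration only; adjust it to your authentication policy)
+    allows such local connections:
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS       METHOD
+host    all       all   127.0.0.1/32  md5
+</programlisting>
+  </para>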
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be fairly large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed by the connection pooler itself to launch worker backends.
+  </para>
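+
+  <para>
+    A minimal <filename>postgresql.conf</filename> fragment that enables the pooler might look like
+    the following (the values are only an illustration):
+<programlisting>
+connection_proxies = 2      # spawn two proxy processes
+session_pool_size  = 4      # up to 4 pooled backends per database/role pair per proxy
+max_sessions       = 1000   # sessions handled by one proxy
+proxy_port         = 6543   # port accepting pooled connections
+</programlisting>
+  </para>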
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
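+
+  <para>
+    For example, the per-proxy counters can be inspected with a simple query
+    (one row is returned for each connection proxy):
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>
+  </para>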
+
+  <para>
+    Since pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. After that it becomes possible to drop the database.
+  </para>
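+
+  <para>
+    For example, assuming <varname>restart_pooler_on_reload</varname> is already enabled in the
+    configuration, a database could be dropped as follows (the database name is just an example):
+<programlisting>
+SELECT pg_reload_conf();   -- shuts down all pooled backends
+DROP DATABASE mydb;
+</programlisting>
+  </para>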
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and will not notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save and restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. So developers of client applications still have a choice:
+    either avoid session-specific operations or do not use pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when the
+    number of connections is small (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
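+
+  <para>
+    For example, the following setting (the value is only an illustration) makes the server
+    terminate sessions that remain idle inside an open transaction for more than one minute:
+<programlisting>
+idle_in_transaction_session_timeout = '1min'
+</programlisting>
+  </para>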
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..dd5caa0724
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..53eece6422 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
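+/*
+ * Emulate socketpair() on Windows: create a listening socket bound to the
+ * loopback interface, connect to it and accept the connection, returning
+ * the two connected sockets in socks[0] and socks[1].
+ */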
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..9622ee79cb 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..b75aedfc86 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for locahost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/*
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections
+ * (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/*
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(false, 0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySock, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..9616bbe5f2
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1024 @@
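+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Built-in connection proxy worker: multiplexes client sessions over a
+ *	  pool of backend connections (one Proxy instance per worker process).
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */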
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own Proxy instance).
+ * A Proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * Proxy debug logging is disabled by default; to enable it, define ELOG as
+ * elog(severity, "PROXY: " fmt, ## __VA_ARGS__).
+ */
+#define ELOG(severity, fmt, ...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So all such channels are linked into a list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because we use edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or operation is successfully completed, false in case of error
+ * or socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is an error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+                        /* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later, when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later, once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already logged */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * once its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already logged */
+		close(port->sock);
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+					proxy_add_client(proxy, port);
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from another auxiliary workers.
+ * In future it may be replaced with background worker.
+ * The main problem with background worker is how to pass socket to it and obtains its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..b5f66519d0 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the given position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,7 +797,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -804,9 +834,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +874,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +884,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +947,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +978,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +995,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1402,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by ont connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#11Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#10)
Re: Built-in connection pooler

On Tue, Jul 2, 2019 at 3:11 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 01.07.2019 12:57, Thomas Munro wrote:

Interesting work. No longer applies -- please rebase.

Rebased version of the patch is attached.
Also, this version of the built-in proxy can be found in the conn_proxy
branch of https://github.com/postgrespro/postgresql.builtin_pool.git

Thanks Konstantin. I haven't looked at the code, but I can't help
noticing that this CF entry and the autoprepare one are both features
that come up again and again on feature request lists I've seen.
That's very cool. They also both need architectural-level review.
With my Commitfest manager hat on: reviewing other stuff would help
with that; if you're looking for something of similar complexity and
also the same level of
everyone-knows-we-need-to-fix-this-!@#$-we-just-don't-know-exactly-how-yet
factor, I hope you get time to provide some more feedback on Takeshi
Ideriha's work on shared caches, which doesn't seem a million miles
from some of the things you're working on.

Could you please fix these compiler warnings so we can see this
running check-world on CI?

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46324
https://travis-ci.org/postgresql-cfbot/postgresql/builds/555180678

--
Thomas Munro
https://enterprisedb.com

#12Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#11)
1 attachment(s)
Re: Built-in connection pooler

On 08.07.2019 3:37, Thomas Munro wrote:

On Tue, Jul 2, 2019 at 3:11 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 01.07.2019 12:57, Thomas Munro wrote:

Interesting work. No longer applies -- please rebase.

Rebased version of the patch is attached.
Also, this version of the built-in proxy can be found in the conn_proxy
branch of https://github.com/postgrespro/postgresql.builtin_pool.git

Thanks Konstantin. I haven't looked at the code, but I can't help
noticing that this CF entry and the autoprepare one are both features
that come up again and again on feature request lists I've seen.
That's very cool. They also both need architectural-level review.
With my Commitfest manager hat on: reviewing other stuff would help
with that; if you're looking for something of similar complexity and
also the same level of
everyone-knows-we-need-to-fix-this-!@#$-we-just-don't-know-exactly-how-yet
factor, I hope you get time to provide some more feedback on Takeshi
Ideriha's work on shared caches, which doesn't seem a million miles
from some of the things you're working on.

Thank you, I will look at Takeshi Ideriha's patch.

Could you please fix these compiler warnings so we can see this
running check-world on CI?

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46324
https://travis-ci.org/postgresql-cfbot/postgresql/builds/555180678

Sorry, I do not have access to a Windows host, so I cannot check the Win32
build myself.
I have fixed all the reported warnings but cannot verify that the Win32
build is now ok.

Attachments:

builtin_connection_proxy-5.patchtext/x-patch; name=builtin_connection_proxy-5.patchDownload
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..9398e561e8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table, or prepares a statement.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main <varname>port</varname> will be assigned dedicated backends,
+          while clients connected to the proxy port will be served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          <varname>session_pool_size</varname>*<varname>connection_proxies</varname>*#databases*#roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is used. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..07f4202f75
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with many
+    CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures, as well as the complexity of the
+    algorithms working on them, is proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey, etc. But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, in which case multiple pgbouncer instances have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
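+  <para>
+    For example (an illustrative configuration, not a recommendation): with
+    <varname>connection_proxies</varname> = 4, <varname>session_pool_size</varname> = 8,
+    two databases and three roles, up to 4*8*2*3 = 192 pooled backends may be launched.
+  </para>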
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
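+  <para>
+    As a minimal illustration (the database name is a placeholder and the ports are the defaults
+    mentioned above), a client chooses between a dedicated and a pooled backend simply by the
+    port it connects to:
+<programlisting>
+psql -p 5432 mydb     # standard port: dedicated backend, no pooling
+psql -p 6543 mydb     # proxy port: pooled connection through a connection proxy
+</programlisting>
+  </para>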
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes which will be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a too-large value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It only affects the size of the wait event set and so can be fairly large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432 so that all client connections to the databases are pooled.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes all pooled backends to be shut down after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
+  </para>
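+  <para>
+    As a rough sketch (the values below are purely illustrative, not tuned recommendations),
+    enabling pooling only requires setting these variables in <literal>postgresql.conf</literal>
+    and restarting the server:
+<programlisting>
+connection_proxies = 4     # enables pooling by spawning four proxy workers
+session_pool_size = 8      # pooled backends per database/role pair in each proxy
+max_sessions = 1000        # client sessions per proxy
+proxy_port = 6543          # clients connect to this port to be pooled
+</programlisting>
+  </para>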
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when using a pooling policy other than session-level pooling.
+    And if an application is not changing the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. So developers of client applications still have a choice
+    either to avoid using session-specific operations or not to use pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. How serious this is depends on application design. If an application opens a database transaction and then waits for user input or some other external event, the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
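+
+  <para>
+    For example, the following setting in <filename>postgresql.conf</filename> terminates any session
+    whose transaction stays idle for more than 30 seconds:
+<programlisting>
+idle_in_transaction_session_timeout = '30s'
+</programlisting>
+  </para>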
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
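+	/*
+	 * In a connection proxy worker pq_init() is called once per served client,
+	 * so free the previously created wait event set (if any) and register the
+	 * on-exit hook only on the first call.
+	 */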
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..e395868eef
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
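+	/*
+	 * On Unix systems the descriptor is passed as SCM_RIGHTS ancillary data
+	 * over the Unix-domain socket pair connecting postmaster and proxy.
+	 */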
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..53eece6422 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
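+    /*
+     * Windows has no socketpair(); emulate it by connecting two TCP sockets
+     * through a temporary listening socket on the loopback interface.
+     */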
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..5d8b65c50a 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -12,7 +12,9 @@ subdir = src/backend/postmaster
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
+override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port -I$(top_srcdir)/src/port
+
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..73a695b5ee 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
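+				/*
+				 * socks[0] stays with the postmaster, which uses it to pass
+				 * accepted client sockets to the proxy; socks[1] is inherited
+				 * by the proxy worker.
+				 */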
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/*
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/*
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
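+		/* Pass the proxy index and its end of the socket pair to the child process through these globals */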
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..ab058fa5f9
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1024 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Wait event set containing backend and client socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/* Proxy trace logging is disabled by default; swap the definitions below to enable it. */
+#define ELOG(severity,fmt,...)
+/* #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__) */
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/*
+ * The backend is ready for the next command and is outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/*
+ * Parse the client's startup packet and assign the client to the proper connection pool based on dbname/role.
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach the client to a backend. Return true if a new backend was started and attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove channel immediately because it can be triggered by other epoll events.
+ * So link all channels in L1 list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send the 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between readers and writers.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or when the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
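+			/*
+			 * The startup packet has no message type byte: it starts directly with
+			 * a four-byte length that includes itself.  All other protocol messages
+			 * consist of a one-byte type followed by a four-byte length that does
+			 * not count the type byte, hence the "+ 1" below.
+			 */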
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle or non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup().
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * It cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		close(port->sock);
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+					proxy_add_client(proxy, port);
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because of the presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..b5f66519d0 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the given position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,7 +797,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -804,9 +834,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +874,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +884,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +947,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +978,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +995,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1402,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and can actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#13Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#12)
Re: Built-in connection pooler

On Tue, Jul 9, 2019 at 8:30 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Rebased version of the patch is attached.

Thanks for including nice documentation in the patch, which gives a
good overview of what's going on. I haven't read any code yet, but I
took it for a quick drive to understand the user experience. These
are just some first impressions.

I started my server with -c connection_proxies=1 and tried to connect
to port 6543 and the proxy segfaulted on null ptr accessing
port->gss->enc. I rebuilt without --with-gssapi to get past that.

Using SELECT pg_backend_pid() from many different connections, I could
see that they were often being served by the same process (although
sometimes it created an extra one when there didn't seem to be a good
reason for it to do that). I could see the proxy managing these
connections with SELECT * FROM pg_pooler_state() (I suppose this would
be wrapped in a view with a name like pg_stat_proxies). I could see
that once I did something like SET foo.bar = 42, a backend became
dedicated to my connection and no other connection could use it. As
described. Neat.

Obviously your concept of tainted backends (= backends that can't be
reused by other connections because they contain non-default session
state) is quite simplistic and would help only the very simplest use
cases. Obviously the problems that need to be solved first to do
better than that are quite large. Personally I think we should move
all GUCs into the Session struct, put the Session struct into shared
memory, and then figure out how to put things like prepared plans into
something like Ideriha-san's experimental shared memory context so
that they also can be accessed by any process, and then we'll mostly
be tackling problems that we'll have to tackle for threads too. But I
think you made the right choice to experiment with just reusing the
backends that have no state like that.

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

That's all I had time for today, but I'm planning to poke this some
more, and get a better understand of how this works at an OS level. I
can see fd passing, IO multiplexing, and other interesting things
happening. I suspect there are many people on this list who have
thoughts about the architecture we should use to allow a smaller
number of PGPROCs and a larger number of connections, with various
different motivations.

Thank you, I will look at Takeshi Ideriha's patch.

Cool.

Could you please fix these compiler warnings so we can see this
running check-world on CI?

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46324
https://travis-ci.org/postgresql-cfbot/postgresql/builds/555180678

Sorry, I do not have access to a Windows host, so I cannot check the Win32
build myself.

C:\projects\postgresql\src\include\../interfaces/libpq/libpq-int.h(33):
fatal error C1083: Cannot open include file: 'pthread-win32.h': No
such file or directory (src/backend/postmaster/proxy.c)
[C:\projects\postgresql\postgres.vcxproj]

These relative includes in proxy.c are part of the problem:

#include "../interfaces/libpq/libpq-fe.h"
#include "../interfaces/libpq/libpq-int.h"

I didn't dig into this much but some first reactions:

1. I see that proxy.c uses libpq, and correctly loads it as a dynamic
library just like postgres_fdw. Unfortunately it's part of core, so
it can't use the same technique as postgres_fdw to add the libpq
headers to the include path.

2. libpq-int.h isn't supposed to be included by code outside libpq,
and in this case it fails to find pthead-win32.h which is apparently
expects to find in either the same directory or the include path. I
didn't look into what exactly is going on (I don't have Windows
either) but I think we can say the root problem is that you shouldn't
be including that from backend code.

--
Thomas Munro
https://enterprisedb.com

#14Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#13)
Re: Built-in connection pooler

On 14.07.2019 8:03, Thomas Munro wrote:

On Tue, Jul 9, 2019 at 8:30 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Rebased version of the patch is attached.

Thanks for including nice documentation in the patch, which gives a
good overview of what's going on. I haven't read any code yet, but I
took it for a quick drive to understand the user experience. These
are just some first impressions.

I started my server with -c connection_proxies=1 and tried to connect
to port 6543 and the proxy segfaulted on null ptr accessing
port->gss->enc. I rebuilt without --with-gssapi to get past that.

First of all a lot of thanks for your review.

Sorry, I failed to reproduce the problem with GSSAPI.
I have configured postgres with --with-openssl --with-gssapi
and then tried to connect to the server with psql using the following
connection string:

psql "sslmode=require dbname=postgres port=6543 krbsrvname=POSTGRES"

There are no segfaults and the port->gss structure is initialized normally.

Can you please explain to me more precisely how to reproduce the problem
(how to configure postgres and how to connect to it)?

Using SELECT pg_backend_pid() from many different connections, I could
see that they were often being served by the same process (although
sometimes it created an extra one when there didn't seem to be a good
reason for it to do that). I could see the proxy managing these
connections with SELECT * FROM pg_pooler_state() (I suppose this would
be wrapped in a view with a name like pg_stat_proxies). I could see
that once I did something like SET foo.bar = 42, a backend became
dedicated to my connection and no other connection could use it. As
described. Neat.

Obviously your concept of tainted backends (= backends that can't be
reused by other connections because they contain non-default session
state) is quite simplistic and would help only the very simplest use
cases. Obviously the problems that need to be solved first to do
better than that are quite large. Personally I think we should move
all GUCs into the Session struct, put the Session struct into shared
memory, and then figure out how to put things like prepared plans into
something like Ideriha-san's experimental shared memory context so
that they also can be accessed by any process, and then we'll mostly
be tackling problems that we'll have to tackle for threads too. But I
think you made the right choice to experiment with just reusing the
backends that have no state like that.

That was not actually my idea: it was proposed by Dimitri Fontaine.
In PgPRO-EE we have another version of the builtin connection pooler
which maintains session context and allows GUCs, prepared statements
and temporary tables to be used in pooled sessions.
But the main idea of this patch was to make the connection pooler less
invasive and minimize changes in the Postgres core. This is why I have
implemented the proxy.

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

Yehh.
This is really the problem. In addition to FreeBSD and MacOS, it also
occurs on Win32.
I have to think more about how to solve it. We should somehow emulate
"edge-triggered" mode for these systems.
The source of the problem is that the proxy is reading data from one socket
and writing it to another socket.
If the write socket is busy, we have to wait until it is available. But at
the same time there can be data available for input,
so poll will return immediately unless we remove the read event for this
socket. I am not sure how to do this better without changing the
WaitEventSet API.

C:\projects\postgresql\src\include\../interfaces/libpq/libpq-int.h(33):
fatal error C1083: Cannot open include file: 'pthread-win32.h': No
such file or directory (src/backend/postmaster/proxy.c)
[C:\projects\postgresql\postgres.vcxproj]

I will investigate the problem with the Win32 build once I get a Win32
image for VirtualBox.

These relative includes in proxy.c are part of the problem:

#include "../interfaces/libpq/libpq-fe.h"
#include "../interfaces/libpq/libpq-int.h"

I didn't dig into this much but some first reactions:

I have added
override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port
-I$(top_srcdir)/src/port

in the postmaster Makefile in order to fix this problem (as in
interfaces/libpq/Makefile).
But it looks like that is not enough. As I wrote above, I will try to solve
this problem once I get access to a Win32 environment.

1. I see that proxy.c uses libpq, and correctly loads it as a dynamic
library just like postgres_fdw. Unfortunately it's part of core, so
it can't use the same technique as postgres_fdw to add the libpq
headers to the include path.

2. libpq-int.h isn't supposed to be included by code outside libpq,
and in this case it fails to find pthead-win32.h which is apparently
expects to find in either the same directory or the include path. I
didn't look into what exactly is going on (I don't have Windows
either) but I think we can say the root problem is that you shouldn't
be including that from backend code.

Looks like proxy.c has to be moved somewhere outside core?
Maybe make it an extension? But it may not be so easy to implement because
the proxy has to be tightly integrated with the postmaster.

#15Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#14)
Re: Built-in connection pooler

On Mon, Jul 15, 2019 at 7:56 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Can you please explain me more precisely how to reproduce the problem
(how to configure postgres and how to connect to it)?

Maybe it's just that postmaster.c's ConnCreate() always allocates a
struct for port->gss if the feature is enabled, but the equivalent
code in proxy.c's proxy_loop() doesn't?
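
If that's the cause, the fix is presumably just to mirror what ConnCreate()
does when the proxy builds the Port for an accepted client. A rough sketch of
what I mean (illustrative only; the guard and the exact placement in
proxy_add_client()/proxy_loop() are my guesses, not code from the patch):

#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
	/* with --with-gssapi, secure_read() dereferences port->gss, so it must exist */
	port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
	if (port->gss == NULL)
	{
		elog(WARNING, "out of memory");
		close(port->sock);
		free(port);
		return;
	}
#endif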

./configure
--prefix=$HOME/install/postgres \
--enable-cassert \
--enable-debug \
--enable-depend \
--with-gssapi \
--with-icu \
--with-pam \
--with-ldap \
--with-openssl \
--enable-tap-tests \
--with-includes="/opt/local/include" \
--with-libraries="/opt/local/lib" \
CC="ccache cc" CFLAGS="-O0"

~/install/postgres/bin/psql postgres -h localhost -p 6543

2019-07-15 08:37:25.348 NZST [97972] LOG: server process (PID 97974)
was terminated by signal 11: Segmentation fault: 11

(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0x0000000104163e7e
postgres`secure_read(port=0x00007fda9ef001d0, ptr=0x00000001047ab690,
len=8192) at be-secure.c:164:6
frame #1: 0x000000010417760d postgres`pq_recvbuf at pqcomm.c:969:7
frame #2: 0x00000001041779ed postgres`pq_getbytes(s="??\x04~?,
len=1) at pqcomm.c:1110:8
frame #3: 0x0000000104284147
postgres`ProcessStartupPacket(port=0x00007fda9ef001d0,
secure_done=true) at postmaster.c:2064:6
...
(lldb) f 0
frame #0: 0x0000000104163e7e
postgres`secure_read(port=0x00007fda9ef001d0, ptr=0x00000001047ab690,
len=8192) at be-secure.c:164:6
161 else
162 #endif
163 #ifdef ENABLE_GSS
-> 164 if (port->gss->enc)
165 {
166 n = be_gssapi_read(port, ptr, len);
167 waitfor = WL_SOCKET_READABLE;
(lldb) print port->gss
(pg_gssinfo *) $0 = 0x0000000000000000

Obviously your concept of tainted backends (= backends that can't be
reused by other connections because they contain non-default session
state) is quite simplistic and would help only the very simplest use
cases. Obviously the problems that need to be solved first to do
better than that are quite large. Personally I think we should move
all GUCs into the Session struct, put the Session struct into shared
memory, and then figure out how to put things like prepared plans into
something like Ideriha-san's experimental shared memory context so
that they also can be accessed by any process, and then we'll mostly
be tackling problems that we'll have to tackle for threads too. But I
think you made the right choice to experiment with just reusing the
backends that have no state like that.

That was not actually my idea: it was proposed by Dimitri Fontaine.
In PgPRO-EE we have another version of builtin connection pooler
which maintains session context and allows to use GUCs, prepared statements
and temporary tables in pooled sessions.

Interesting. Do you serialise/deserialise plans and GUCs using
machinery similar to parallel query, and did you change temporary
tables to use shared buffers? People have suggested that we do that
to allow temporary tables in parallel queries too. FWIW I have also
wondered about per (database, user) pools for reusable parallel
workers.

But the main idea of this patch was to make connection pooler less
invasive and
minimize changes in Postgres core. This is why I have implemented proxy.

How do you do it without a proxy?

One idea I've wondered about that doesn't involve feeding all network
IO through an extra process and extra layer of syscalls like this is
to figure out how to give back your PGPROC slot when idle. If you
don't have one and can't acquire one at the beginning of a
transaction, you wait until one becomes free. When idle and not in a
transaction you give it back to the pool, perhaps after a slight
delay. That may be impossible for some reason or other, I don't know.
If it could be done, it'd deal with the size-of-procarray problem
(since we like to scan it) and provide a kind of 'admission control'
(limiting number of transactions that can run), but wouldn't deal with
the amount-of-backend-memory-wasted-on-per-process-caches problem
(maybe solvable by shared caching).

Ok, time for a little bit more testing. I was curious about the user
experience when there are no free backends.

1. I tested with connection_proxies=1, max_connections=4, and I began
3 transactions. Then I tried to make a 4th connection, and it blocked
while trying to connect and the log shows a 'sorry, too many clients
already' message. Then I committed one of my transactions and the 4th
connection was allowed to proceed.

2. I tried again, this time with 4 already existing connections and
no transactions. I began 3 concurrent transactions, and then when I
tried to begin a 4th transaction the BEGIN command blocked until one
of the other transactions committed.

Some thoughts: Why should case 1 block? Shouldn't I be allowed to
connect, even though I can't begin a transaction without waiting yet?
Why can I run only 3 transactions when I said max_connection=4? Ah,
that's probably because the proxy itself is eating one slot, and
indeed if I set connection_proxies to 2 I can now run only two
concurrent transactions. And yet when there were no transactions
running I could still open 4 connections. Hmm.

The general behaviour of waiting instead of raising an error seems
sensible, and that's how client-side connection pools usually work.
Perhaps some of the people who have wanted admission control were
thinking of doing it at the level of queries rather than whole
transactions though, I don't know. I suppose in extreme cases it's
possible to create invisible deadlocks, if a client is trying to
control more than one transaction concurrently, but that doesn't seem
like a real problem.

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

Yehh.
This is really the problem. In addition to FreeBSD and MacOS, it also
takes place at Win32.
I have to think more how to solve it. We should somehow emulate
"edge-triggered" more for this system.
The source of the problem is that proxy is reading data from one socket
and writing it in another socket.
If write socket is busy, we have to wait until is is available. But at
the same time there can be data available for input,
so poll will return immediately unless we remove read event for this
socket. Not here how to better do it without changing
WaitEvenSet API.

Can't you do this by removing events you're not interested in right
now, using ModifyWaitEvent() to change between WL_SOCKET_WRITEABLE and
WL_SOCKET_READABLE as appropriate? Perhaps the problem you're worried
about is that this generates extra system calls in the epoll()
implementation? I think that's not a problem for poll(), and could be
fixed for the kqueue() implementation I plan to commit eventually. I
have no clue for Windows.
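
To make that concrete, here is roughly what I had in mind; this is only a
sketch on my side, reusing the Channel fields visible in the patch
(proxy->wait_events, event_pos), while the helper name and its call sites are
assumptions:

/*
 * Ask the kernel only about what we can act on right now: a channel with
 * buffered outgoing data waits for its socket to become writable, otherwise
 * it waits for new input.  channel_read()/channel_write() would re-arm the
 * event whenever the pending-write state flips, so a level-triggered
 * poll()/Win32 backend never spins on a permanently writable socket.
 */
static void
channel_update_interest(Channel *chan, bool write_pending)
{
	ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos,
					write_pending ? WL_SOCKET_WRITEABLE : WL_SOCKET_READABLE,
					NULL);
}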

Looks like proxy.c has to be moved somewhere outside core?
May be make it an extension? But it may be not so easy to implement because
proxy has to be tightly integrated with postmaster.

I'm not sure. Seems like it should be solvable with the code in core.

--
Thomas Munro
https://enterprisedb.com

#16Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#15)
Re: Built-in connection pooler

On 15.07.2019 1:48, Thomas Munro wrote:

On Mon, Jul 15, 2019 at 7:56 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Can you please explain me more precisely how to reproduce the problem
(how to configure postgres and how to connect to it)?

Maybe it's just that postmaster.c's ConnCreate() always allocates a
struct for port->gss if the feature is enabled, but the equivalent
code in proxy.c's proxy_loop() doesn't?

./configure
--prefix=$HOME/install/postgres \
--enable-cassert \
--enable-debug \
--enable-depend \
--with-gssapi \
--with-icu \
--with-pam \
--with-ldap \
--with-openssl \
--enable-tap-tests \
--with-includes="/opt/local/include" \
--with-libraries="/opt/local/lib" \
CC="ccache cc" CFLAGS="-O0"

~/install/postgres/bin/psql postgres -h localhost -p 6543

2019-07-15 08:37:25.348 NZST [97972] LOG: server process (PID 97974)
was terminated by signal 11: Segmentation fault: 11

(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0x0000000104163e7e
postgres`secure_read(port=0x00007fda9ef001d0, ptr=0x00000001047ab690,
len=8192) at be-secure.c:164:6
frame #1: 0x000000010417760d postgres`pq_recvbuf at pqcomm.c:969:7
frame #2: 0x00000001041779ed postgres`pq_getbytes(s="??\x04~?,
len=1) at pqcomm.c:1110:8
frame #3: 0x0000000104284147
postgres`ProcessStartupPacket(port=0x00007fda9ef001d0,
secure_done=true) at postmaster.c:2064:6
...
(lldb) f 0
frame #0: 0x0000000104163e7e
postgres`secure_read(port=0x00007fda9ef001d0, ptr=0x00000001047ab690,
len=8192) at be-secure.c:164:6
161 else
162 #endif
163 #ifdef ENABLE_GSS
-> 164 if (port->gss->enc)
165 {
166 n = be_gssapi_read(port, ptr, len);
167 waitfor = WL_SOCKET_READABLE;
(lldb) print port->gss
(pg_gssinfo *) $0 = 0x0000000000000000

Thank you, fixed (pushed to
https://github.com/postgrespro/postgresql.builtin_pool.git repository).
I am not sure that GSS authentication works as intended, but at least

psql "sslmode=require dbname=postgres port=6543 krbsrvname=POSTGRES"

connects normally.

Obviously your concept of tainted backends (= backends that can't be
reused by other connections because they contain non-default session
state) is quite simplistic and would help only the very simplest use
cases. Obviously the problems that need to be solved first to do
better than that are quite large. Personally I think we should move
all GUCs into the Session struct, put the Session struct into shared
memory, and then figure out how to put things like prepared plans into
something like Ideriha-san's experimental shared memory context so
that they also can be accessed by any process, and then we'll mostly
be tackling problems that we'll have to tackle for threads too. But I
think you made the right choice to experiment with just reusing the
backends that have no state like that.

That was not actually my idea: it was proposed by Dimitri Fontaine.
In PgPRO-EE we have another version of builtin connection pooler
which maintains session context and allows to use GUCs, prepared statements
and temporary tables in pooled sessions.

Interesting. Do you serialise/deserialise plans and GUCs using
machinery similar to parallel query, and did you changed temporary
tables to use shared buffers? People have suggested that we do that
to allow temporary tables in parallel queries too. FWIW I have also
wondered about per (database, user) pools for reusable parallel
workers.

No. The main difference between the PG-Pro (conn_pool) and vanilla
(conn_proxy) versions of the connection pooler
is that the first one binds sessions to pool workers while the latter uses a
proxy to scatter requests between workers.

So in conn_pool the postmaster accepts a client connection and schedules it
(using one of the provided scheduling policies, i.e. round robin, random,
load balancing, ...) to one of the worker backends. Then the client socket is
transferred to this backend, and client and backend are connected directly.
A session is never rescheduled, i.e. it is bound to its backend. One backend
serves multiple sessions. Session GUCs and some static variables
are saved in the session context. Each session has its own temporary
namespace, so temporary tables of different sessions do not interleave.
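
For reference, in both variants the hand-off is built on the
pg_send_sock()/pg_recv_sock() primitives added by the patch (declared in
src/include/port.h). Very roughly, ignoring error handling and the real
control flow around it (the variable names below are only illustrative, not
taken from the patch):

	/* sender side (postmaster), after accept() of a client connection */
	if (pg_send_sock(worker_pipe, client_sock, worker_pid) < 0)
		elog(LOG, "could not pass client socket to pool worker: %m");
	else
		close(client_sock);      /* the worker owns the descriptor now */

	/* receiver side (pool or proxy worker), when worker_pipe becomes readable */
	pgsocket	sock = pg_recv_sock(worker_pipe);

	if (sock == PGINVALID_SOCKET)
		elog(WARNING, "failed to receive session socket: %m");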

The direct connection between client and backend eliminates the proxy
overhead. But at the same time, binding sessions to backends is the
main drawback of this approach: a long-lived transaction can block all
other sessions scheduled to this backend.
I have thought a lot about the possibility of rescheduling sessions. The main
"show stopper" is actually temporary tables.
The implementation of temporary tables is one of the "bad smelling" places
in Postgres, causing multiple problems:
parallel queries, using temporary tables on replicas, catalog bloating,
connection pooling...
This is why I have tried to implement "global" temporary tables (like in
Oracle).
I am going to publish this patch soon: in this case the table definition is
global, but the data is local to each session.
Also, global temporary tables are accessed through the shared pool, which
makes it possible to use them in parallel queries.
But that is a separate story, not directly related to connection pooling.

The proxy approach doesn't suffer from this problem.

But the main idea of this patch was to make connection pooler less
invasive and
minimize changes in Postgres core. This is why I have implemented proxy.

How do you do it without a proxy?

I hope I have explained it above.
Actually, this version of the connection pooler is also available on GitHub
(https://github.com/postgrespro/postgresql.builtin_pool.git repository,
branch conn_pool).

Ok, time for a little bit more testing. I was curious about the user
experience when there are no free backends.

1. I tested with connection_proxies=1, max_connections=4, and I began
3 transactions. Then I tried to make a 4th connection, and it blocked
while trying to connect and the log shows a 'sorry, too many clients
already' message. Then I committed one of my transactions and the 4th
connection was allowed to proceed.

2. I tried again, this time with 4 already existing connections and
no transactions. I began 3 concurrent transactions, and then when I
tried to begin a 4th transaction the BEGIN command blocked until one
of the other transactions committed.

Some thoughts: Why should case 1 block? Shouldn't I be allowed to
connect, even though I can't begin a transaction without waiting yet?
Why can I run only 3 transactions when I said max_connection=4? Ah,
that's probably because the proxy itself is eating one slot, and
indeed if I set connection_proxies to 2 I can now run only two
concurrent transactions. And yet when there were no transactions
running I could still open 4 connections. Hmm.

max_connections is not the right switch to control the behavior of the
connection pooler.
You should adjust the session_pool_size, connection_proxies and max_sessions
parameters instead.

What happens in case 1? The default value of session_pool_size is 10. It
means that postgres will spawn up to 10 backends for each database/user
combination. But the max_connections limit will be exhausted earlier. Maybe it
is better to prohibit setting session_pool_size larger than max_connections,
or to adjust it automatically according to max_connections. But that is also
not so easy to enforce, because a separate set of pool workers is created for
each database/user combination.

So I agree that the observed behavior is confusing, but I do not have a good
idea how to improve it.

The general behaviour of waiting instead of raising an error seems
sensible, and that's how client-side connection pools usually work.
Perhaps some of the people who have wanted admission control were
thinking of doing it at the level of queries rather than whole
transactions though, I don't know. I suppose in extreme cases it's
possible to create invisible deadlocks, if a client is trying to
control more than one transaction concurrently, but that doesn't seem
like a real problem.

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

Yehh.
This is really a problem. In addition to FreeBSD and MacOS, it also
occurs on Win32.
I have to think more about how to solve it. We should somehow emulate
"edge-triggered" mode for these systems.
The source of the problem is that the proxy is reading data from one socket
and writing it to another socket.
If the write socket is busy, we have to wait until it is available. But at
the same time there can be data available for input,
so poll will return immediately unless we remove the read event for this
socket. I am not sure how to do this better without changing the
WaitEventSet API.

Can't you do this by removing events you're not interested in right
now, using ModifyWaitEvent() to change between WL_SOCKET_WRITEABLE and
WL_SOCKET_READABLE as appropriate? Perhaps the problem you're worried
about is that this generates extra system calls in the epoll()
implementation? I think that's not a problem for poll(), and could be
fixed for the kqueue() implementation I plan to commit eventually. I
have no clue for Windows.

The Windows implementation is similar to poll().
Yes, definitely it can be done by removing the read handle from the wait
event set after each pending write operation.
But it seems to be very inefficient in the case of the epoll() implementation
(where changing the event set requires a separate syscall).
So either we have to add some extra function, e.g. WaitEventEdgeTrigger,
which would do nothing for epoll(),
or somehow implement edge triggering inside the WaitEvent*
implementation itself (it is not clear how to do that, since read/write
operations are
not performed through this API).
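
To make the idea concrete, here is a minimal sketch (not the actual patch
code) of toggling the wait mask with ModifyWaitEvent(): WL_SOCKET_WRITEABLE is
kept in a channel's event mask only while that channel still has unsent data,
so a level-triggered poll() backend stops reporting an always-writable socket.
The event_pos and proxy->wait_events fields follow the Channel/Proxy
structures in the patch; has_pending_output is a hypothetical flag supplied by
the caller, and error handling is omitted.

static void
channel_update_wait_mask(Channel *chan, bool has_pending_output)
{
	uint32		events = WL_SOCKET_READABLE;

	/* Poll for writability only while some output is still pending. */
	if (has_pending_output)
		events |= WL_SOCKET_WRITEABLE;

	ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, events, NULL);
}

The downside discussed above is that with the epoll() implementation every
such ModifyWaitEvent() call costs an extra epoll_ctl() syscall, which is what
a WaitEventEdgeTrigger-style hook or edge-trigger emulation inside
WaitEventSet would avoid.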

Looks like proxy.c has to be moved somewhere outside the core?
Maybe make it an extension? But it may not be so easy to implement, because
the proxy has to be tightly integrated with the postmaster.

I'm not sure. Seems like it should be solvable with the code in core.

I also think so. It is now working on Unix, and I hope I can also fix the
Win32 build.

#17Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#13)
1 attachment(s)
Re: Built-in connection pooler

On 14.07.2019 8:03, Thomas Munro wrote:

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

I have committed a patch which emulates the epoll EPOLLET flag and so should
avoid the busy loop with poll().
I would be pleased if you could check it on your FreeBSD box.
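
For readers following the discussion: the general way to emulate EPOLLET on
top of a level-triggered poll() is to report readiness for a descriptor only
when it becomes ready, and to suppress further reports until the caller has
drained the socket (read()/write() returned EAGAIN) and re-armed it. The
fragment below only illustrates that idea and is not the code from the
attached patch:

#include <poll.h>
#include <stdbool.h>

typedef struct EdgeState
{
	bool	read_armed;		/* deliver the next POLLIN edge? */
	bool	write_armed;	/* deliver the next POLLOUT edge? */
} EdgeState;

/* Turn level-triggered poll() results into edge-triggered events. */
static short
edge_filter(EdgeState *es, short revents)
{
	short	delivered = 0;

	if ((revents & POLLIN) && es->read_armed)
	{
		es->read_armed = false;		/* suppress repeats until re-armed */
		delivered |= POLLIN;
	}
	if ((revents & POLLOUT) && es->write_armed)
	{
		es->write_armed = false;
		delivered |= POLLOUT;
	}
	return delivered;
}

/*
 * The caller re-arms a direction after draining the socket, i.e. when read()
 * or write() returns -1 with errno set to EAGAIN or EWOULDBLOCK:
 *     es->read_armed = true;    or    es->write_armed = true;
 */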

Attachments:

builtin_connection_proxy-7.patchtext/x-patch; name=builtin_connection_proxy-7.patchDownload
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..9398e561e8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          session_pool_size*connection_proxies*#databases*#roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies scheduling policy for assigning session to proxies in case of
+          connection pooling. Default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load.
+          The load of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..07f4202f75
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can cause consumption of a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, is proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey,... But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, in which case multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting from version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client is connected to postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes which will be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, then the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set, so it can be quite large without any essential negative impact on system resource consumption.
+    The default value is 1000, so the maximal number of pooled client connections is limited by <varname>connection_proxies</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432, so that all connections to the databases will be pooled by default.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed by the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    As far as pooled backends are not terminated on client exist, it will not
+    be possible to drop database to which them are connected.  It can be achieved without server restart using <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> cause shutdown of all pooled backends after execution of <function>pg_reload_conf()</function> function. Then it will be possible to drop database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level pooling is used.
+    And if an application is not changing the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. So developers of client applications still have a choice
+    either to avoid using session-specific operations or not to use pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially on latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. And such a backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..e395868eef
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..53eece6422 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..5d8b65c50a 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -12,7 +12,9 @@ subdir = src/backend/postmaster
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
+override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port -I$(top_srcdir)/src/port
+
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..73a695b5ee 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..36f1a53987
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1046 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+	bool     write_pending;     /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+//#define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * Now, if the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * The channel can not be removed immediately because it may still be referenced by other epoll events,
+ * so link all such channels into a single-linked list of pending deletes.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0) /* do not accept more read events while a write request is pending */
+	{
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending && rc > 0)
+	{
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
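+		/*
+		 * Framing assumed here: a regular protocol message is a 1-byte type
+		 * code followed by a 4-byte big-endian length that counts itself but
+		 * not the type byte; the startup packet has no type byte and its
+		 * 4-byte length covers the whole packet.
+		 */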
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+                        /* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
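+	/*
+	 * The saved handshake response is a sequence of protocol messages
+	 * (1-byte type + 4-byte length). The loop below skips messages until the
+	 * BackendKeyData ('K') message, whose body starts with the backend PID.
+	 */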
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		close(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
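+ *
+ * Illustrative usage (the result shape matches the columns listed above;
+ * values depend on configuration and workload):
+ *   SELECT * FROM pg_pooler_state();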
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..d2806b7399 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Head of single-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
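+	/* Reuse a slot released by DeleteWaitEventFromSet if available; free slots are chained through their "pos" field */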
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,10 +797,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* With epoll, WL_SOCKET_EDGE maps to the native EPOLLET flag, so ModifyWaitEvent calls made only to emulate edge-triggered mode can be skipped */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +840,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +880,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +890,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +953,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +984,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1001,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1408,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
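+/* Edge-triggered socket notification: with epoll this maps to EPOLLET; on other platforms callers emulate it by adjusting the event mask via ModifyWaitEvent */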
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#18Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#17)
1 attachment(s)
Re: Built-in connection pooler

On 15.07.2019 17:04, Konstantin Knizhnik wrote:

On 14.07.2019 8:03, Thomas Munro wrote:

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising.  I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000)     = 1 (0x1)

Ouch.  I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux.  FWIW the same happens on a Mac.

I have committed a patch which emulates the epoll EPOLLET flag and so should
avoid the busy loop with poll().
I will be pleased if you can check it on your FreeBSD box.

Sorry, the attached patch was incomplete.
Please try this version of the patch.
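
To try it out, a minimal setup might look like the following (values are
only an illustration; proxy_port defaults to 6543):

    # postgresql.conf
    connection_proxies = 1
    session_pool_size = 10

    $ psql "host=localhost port=6543 dbname=postgres"
    postgres=# select * from pg_pooler_state();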

Attachments:

builtin_connection_proxy-8.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..9398e561e8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy,
+          which performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          Postmaster spawns a separate worker process for each proxy and scatters connections between
+          proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is
+          session_pool_size*connection_proxies*databases*roles.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With <literal>round-robin</literal> policy postmaster cyclicly scatter sessions between proxies.
+        </para>
+        <para>
+          With <literal>random</literal> policy postmaster randomly choose proxy for new session.
+        </para>
+        <para>
+          With <literal>load-balancing</literal> policy postmaster choose proxy with lowest load average.
+          Load average of proxy is estimated by number of clients connection assigned to this proxy with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          When enabled, session pool workers are restarted once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..07f4202f75
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for PostgreSQL resources.
+    In addition, the size of many internal data structures, as well as the complexity of the
+    algorithms operating on them, is proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why most production PostgreSQL installations use some kind of connection pooling,
+    such as pgbouncer, J2EE, or odyssey. But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for PostgreSQL) is
+    single-threaded and so can become a bottleneck for high-load systems, requiring multiple pgbouncer instances to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the PostgreSQL locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each PostgreSQL backend can work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
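+
+  <para>
+    For example (illustrative values only), with <varname>connection_proxies</varname> = 2,
+    <varname>session_pool_size</varname> = 10, and clients connecting to a single database under
+    two roles, at most 2 * 10 * 1 * 2 = 40 pooled backends can be launched.
+  </para>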
+
+  <para>
+    To minimize the number of changes in the PostgreSQL core, the built-in connection pooler does not try to save and restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, taking advisory locks),
+    then the backend executing this session is considered <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session terminates, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to PostgreSQL through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, PostgreSQL should be configured
+    to accept local connections (in the <literal>pg_hba.conf</literal> file).
+  </para>
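+
+  <para>
+    For example, assuming the default proxy port, a client obtains a pooled connection simply
+    by connecting to port 6543 instead of 5432:
+  </para>
+<programlisting>
+psql "host=localhost port=6543 dbname=postgres"
+</programlisting>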
+
+  <para>
+    If a client application connects through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero. An example configuration is shown below.
+  </para>
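+
+  <para>
+    A minimal illustrative <filename>postgresql.conf</filename> fragment enabling pooling
+    (the values are examples, not recommendations) might look like this:
+  </para>
+<programlisting>
+connection_proxies = 2      # spawn two proxy worker processes
+session_pool_size = 10      # up to 10 pooled backends per proxy and database/user pair
+# max_sessions and proxy_port keep their defaults (1000 and 6543)
+</programlisting>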
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It affects only the size of the wait event set, so it can be set large enough without any essential negative impact on system resource consumption.
+    The default value is 1000. The maximal number of connections to one database/role is thus limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard PostgreSQL port 5432, so that all connections to the databases are pooled by default.
+    In that case it is still necessary to have a port for direct connections to the database (dedicated backends),
+    because the connection pooler itself needs it to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
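+
+  <para>
+    For example, the state of all proxies can be inspected with a simple query:
+  </para>
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>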
+
+  <para>
+    Since pooled backends are not terminated on client exit, it would normally not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes all pooled backends to be shut down after execution of the <function>pg_reload_conf()</function> function. After that it is possible to drop the database.
+  </para>
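+
+  <para>
+    An illustrative sequence (<literal>mydb</literal> is a placeholder database name), after setting
+    <varname>restart_pooler_on_reload</varname> to <literal>true</literal> in <filename>postgresql.conf</filename>:
+  </para>
+<programlisting>
+SELECT pg_reload_conf();   -- pooled backends are shut down
+DROP DATABASE mydb;
+</programlisting>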
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any additional components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save and restore session context.
+    Although this is not so difficult to do, it requires more changes in the PostgreSQL core. So developers of client applications still have a choice:
+    either avoid session-specific operations or do not use pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially on latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions, as in the example below.
+  </para>
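+
+  <para>
+    A sketch with an illustrative timeout value:
+  </para>
+<programlisting>
+SET idle_in_transaction_session_timeout = '1min';   -- terminate sessions idle in a transaction longer than one minute
+</programlisting>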
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..e395868eef
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+    return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..53eece6422 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..5d8b65c50a 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -12,7 +12,9 @@ subdir = src/backend/postmaster
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
+override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port -I$(top_srcdir)/src/port
+
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..73a695b5ee 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for locahost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate workload of proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for autovac workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..1531bd7554
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1061 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+	bool     write_pending;      /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool     read_pending;       /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched, and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * Uncomment the elog() variant below to enable proxy trace logging:
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove channel immediately because it can be triggered by other epoll events.
+ * So link all channels in L1 list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send the 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
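+
+/*
+ * Note on message framing (descriptive comment only): channel_read relies on the standard
+ * frontend/backend protocol, where each message is a 1-byte type tag followed by a
+ * 4-byte big-endian length that counts itself, e.g. ReadyForQuery is
+ *
+ *     'Z' | int32 length = 5 | transaction status byte ('I', 'T' or 'E')
+ *
+ * This is why at least 5 bytes must be buffered before a message can be parsed, and why
+ * the startup packet (which has no type byte, only a length) is handled separately above.
+ */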
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched through a libpq connection to the postmaster, which forks it via the normal BackendStartup path.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
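+	/*
+	 * Note (descriptive comment only): the handshake response saved above is a sequence of
+	 * ordinary protocol messages (each a 1-byte tag plus a 4-byte length that counts itself).
+	 * The loop below skips over them until it finds BackendKeyData, which is framed as
+	 *
+	 *     'K' | int32 length = 12 | int32 backend PID | int32 cancel key
+	 *
+	 * so the backend PID starts 5 bytes after the 'K' tag.
+	 */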
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions, the error was already logged */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions, the error was already logged */
+		close(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about proxies state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
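+/*
+ * Example usage (illustrative only):
+ *
+ *     SELECT * FROM pg_pooler_state();
+ *
+ * returns one row per connection proxy with the counters described above.
+ */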
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..d2806b7399 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* single-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,10 +797,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* With epoll, EPOLLET already provides edge-triggered behavior, so events marked WL_SOCKET_EDGE need no re-arming here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +840,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +880,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +890,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +953,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +984,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1001,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1408,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#19Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#18)
Re: Built-in connection pooler

Hi Konstantin,

Thanks for your work on this. I'll try to do more testing in the next few
days, here's what I have so far.

make installcheck-world: passed

The v8 patch [1] applies, though I get indent and whitespace errors:

<stdin>:79: tab in indent.
"Each proxy launches its own subset of backends. So
maximal number of non-tainted backends is "
<stdin>:80: tab in indent.
"session_pool_size*connection_proxies*databases*roles.
<stdin>:519: indent with spaces.
char buf[CMSG_SPACE(sizeof(sock))];
<stdin>:520: indent with spaces.
memset(buf, '\0', sizeof(buf));
<stdin>:522: indent with spaces.
/* On Mac OS X, the struct iovec is needed, even if it points to
minimal data */
warning: squelched 82 whitespace errors
warning: 87 lines add whitespace errors.

In connpool.sgml:

"but it can be changed to standard Postgres 4321"

Should be 5432?

" As far as pooled backends are not terminated on client exist, it will not
be possible to drop database to which them are connected."

Active discussion in [2] might change that, it is also in this July
commitfest [3].

"Unlike pgbouncer and other external connection poolera"

Should be "poolers"

"So developers of client applications still have a choice
either to avoid using session-specific operations either not to use
pooling."

That sentence isn't smooth for me to read. Maybe something like:
"Developers of client applications have the choice to either avoid using
session-specific operations, or not use built-in pooling."

[1]: /messages/by-id/attachment/102610/builtin_connection_proxy-8.patch

[2]: /messages/by-id/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com

[3]: https://commitfest.postgresql.org/23/2055/

Ryan Lambert

On Tue, Jul 16, 2019 at 12:20 AM Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:


On 15.07.2019 17:04, Konstantin Knizhnik wrote:

On 14.07.2019 8:03, Thomas Munro wrote:

On my FreeBSD box (which doesn't have epoll(), so it's latch.c's old
school poll() for now), I see the connection proxy process eating a
lot of CPU and the temperature rising. I see with truss that it's
doing this as fast as it can:

poll({ 13/POLLIN 17/POLLIN|POLLOUT },2,1000) = 1 (0x1)

Ouch. I admit that I had the idea to test on FreeBSD because I
noticed the patch introduces EPOLLET and I figured this might have
been tested only on Linux. FWIW the same happens on a Mac.

I have committed patch which emulates epoll EPOLLET flag and so should
avoid busy loop with poll().
I will be pleased if you can check it at FreeBSD box.

Sorry, attached patch was incomplete.
Please try this version of the patch.

#20Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#19)
1 attachment(s)
Re: Built-in connection pooler

Hi, Ryan

On 18.07.2019 6:01, Ryan Lambert wrote:

Hi Konstantin,

Thanks for your work on this.  I'll try to do more testing in the next
few days, here's what I have so far.

make installcheck-world: passed

The v8 patch [1] applies, though I get indent and whitespace errors:

<stdin>:79: tab in indent.
                 "Each proxy launches its own subset of backends.
So maximal number of non-tainted backends is "
<stdin>:80: tab in indent.
"session_pool_size*connection_proxies*databases*roles.
<stdin>:519: indent with spaces.
    char buf[CMSG_SPACE(sizeof(sock))];
<stdin>:520: indent with spaces.
    memset(buf, '\0', sizeof(buf));
<stdin>:522: indent with spaces.
    /* On Mac OS X, the struct iovec is needed, even if it points
to minimal data */
warning: squelched 82 whitespace errors
warning: 87 lines add whitespace errors.

In connpool.sgml:

"but it can be changed to standard Postgres 4321"

Should be 5432?

" As far as pooled backends are not terminated on client exist, it
will not
    be possible to drop database to which them are connected."

Active discussion in [2] might change that, it is also in this July
commitfest [3].

"Unlike pgbouncer and other external connection poolera"

Should be "poolers"

"So developers of client applications still have a choice
    either to avoid using session-specific operations either not to
use pooling."

That sentence isn't smooth for me to read.  Maybe something like:
"Developers of client applications have the choice to either avoid
using session-specific operations, or not use built-in pooling."

[1]
/messages/by-id/attachment/102610/builtin_connection_proxy-8.patch

[2]
/messages/by-id/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com

[3] https://commitfest.postgresql.org/23/2055/

Thank you for the review.
I have fixed all reported issues except the one related to the "dropdb
--force" discussion.
As far as that patch is not yet committed, I cannot rely on it yet.
Certainly I can just remove this sentence from the documentation, assuming
that the patch will be committed soon.
But then some extra effort will be needed to terminate pooler backends
of the dropped database.

Attachments:

builtin_connection_proxy-9.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..50be793e26 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends,
+          so the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is used. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..8486ce1e8d
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can cause consumption of a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, is proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, Odyssey, and others. But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for high-load systems, forcing multiple pgbouncer instances to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximum number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximum number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
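+  <para>
+    For example, with <varname>connection_proxies</varname> = 2, <varname>session_pool_size</varname> = 10,
+    two databases and one role, up to 2 * 10 * 2 * 1 = 40 non-dedicated backends can be launched.
+  </para>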
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save and restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
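+  <para>
+    For example, each of the following commands, executed through a pooled connection, makes the
+    backend tainted (the object names here are purely illustrative):
+<programlisting>
+SET work_mem = '64MB';
+CREATE TEMPORARY TABLE my_temp(id integer);
+PREPARE my_stmt AS SELECT 1;
+SELECT pg_advisory_lock(42);
+</programlisting>
+  </para>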
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (the <literal>pg_hba.conf</literal> file).
+  </para>
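+  <para>
+    For example, with the default port settings a client can connect either through the pooler or
+    directly:
+<programlisting>
+$ psql -p 6543 postgres   # pooled connection through a proxy
+$ psql -p 5432 postgres   # direct connection served by a dedicated backend
+</programlisting>
+  </para>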
+
+  <para>
+    If a client application connects through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing across proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
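+  <para>
+    For example, the following <filename>postgresql.conf</filename> fragment enables the pooler with
+    two proxies and ten backends per pool (the last two settings just repeat their defaults):
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+proxy_port = 6543
+max_sessions = 1000
+</programlisting>
+  </para>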
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximum number of backends per connection pool. The maximum number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximum number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be fairly large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximum number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that by default all connections to the databases are pooled.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    Such a port is needed by the connection pooler itself to launch worker backends.
+  </para>
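+  <para>
+    For example, one possible configuration making pooled connections the default while keeping a
+    separate port for direct connections is to swap the two ports:
+<programlisting>
+port = 5433          # direct connections (dedicated backends)
+proxy_port = 5432    # pooled connections on the standard port
+</programlisting>
+  </para>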
+
+  <para>
+    The postmaster distributes sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well with a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the state of the proxies using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
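+  <para>
+    For example, the state of all proxies can be inspected with:
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>
+  </para>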
+
+  <para>
+    Since pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. After that it is possible to drop the database.
+  </para>
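+  <para>
+    For example, assuming the variable can be changed at runtime with <command>ALTER SYSTEM</command>,
+    a database used by pooled backends could be dropped as follows (<literal>mydb</literal> is just an
+    illustrative name):
+<programlisting>
+ALTER SYSTEM SET restart_pooler_on_reload = true;
+SELECT pg_reload_conf();
+DROP DATABASE mydb;
+</programlisting>
+  </para>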
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any additional components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session-level is used.
+    And if an application does not change the session context, it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save and restore session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid session-specific operations or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients use prepared statements, there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially on latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when the
+    number of connections is small (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
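+  <para>
+    For example (the timeout value here is only an illustration):
+<programlisting>
+idle_in_transaction_session_timeout = '30s'
+</programlisting>
+  </para>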
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	/* Creating a temporary table (unless dropped at commit) taints the backend */
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..a76db8d171
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..53eece6422 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..5d8b65c50a 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -12,7 +12,9 @@ subdir = src/backend/postmaster
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
+override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port -I$(top_srcdir)/src/port
+
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..73a695b5ee 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..1531bd7554
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1061 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+	bool     write_pending;      /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool     read_pending;       /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, Port* client_port);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * Proxy trace logging is disabled by default; to enable it, redefine ELOG as:
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (!chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, chan->client_port);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove channel immediately because it can be triggered by other epoll events.
+ * So link all channels in L1 list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try increasing the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, Port* client_port)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * It cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
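+	/* BackendKeyData message: type byte 'K', 4-byte length, 4-byte PID, 4-byte cancel key, so the PID starts at offset 5 */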
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already logged */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. The client will be assigned to a concrete session pool
+ * once its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already logged */
+		close(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about proxies state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..d2806b7399 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,10 +797,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* Under epoll, edge-triggered (EPOLLET) events need not be re-armed, so there is nothing to do for them here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +840,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +880,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +890,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +953,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +984,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1001,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1408,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pooler
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
#21Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#20)
Re: Built-in connection pooler

I have fixed all reported issues except one related to the "dropdb --force"
discussion.
As that patch is not yet committed, I cannot rely on it yet.
Certainly I can just remove this sentence from the documentation, assuming
that the patch will be committed soon.
But then some extra effort will be needed to terminate pooler backends
of a dropped database.

Great, thanks. Understood about the non-committed item. I did mark that
item as ready for committer last night so we will see. I should have time
to put the actual functionality of your patch to the test later today or
tomorrow. Thanks,

Ryan Lambert

#22Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#20)
Re: Built-in connection pooler

Here's what I found tonight in your latest patch (9). The output from git
apply is better, fewer whitespace errors, but still getting the following.
Ubuntu 18.04 if that helps.

git apply -p1 < builtin_connection_proxy-9.patch
<stdin>:79: tab in indent.
Each proxy launches its own subset of backends.
<stdin>:634: indent with spaces.
union {
<stdin>:635: indent with spaces.
struct sockaddr_in inaddr;
<stdin>:636: indent with spaces.
struct sockaddr addr;
<stdin>:637: indent with spaces.
} a;
warning: squelched 54 whitespace errors
warning: 59 lines add whitespace errors.

A few more minor edits. In config.sgml:

"If the <varname>max_sessions</varname> limit is reached new connection are
not accepted."
Should be "connections".

"The default value is 10, so up to 10 backends will server each database,"
"sever" should be "serve" and the sentence should end with a period instead
of a comma.

In postmaster.c:

/* The socket number we are listening for poolled connections on */
"poolled" --> "pooled"

"(errmsg("could not create listen socket for locahost")));"

"locahost" -> "localhost".

" * so to support order balancing we should do dome smart work here."

"dome" should be "some"?

I don't see any tests covering this new functionality. It seems that this
is significant enough functionality to warrant some sort of tests, but I
don't know exactly what those would/should be.

Thanks,
Ryan

#23Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#22)
1 attachment(s)
Re: Built-in connection pooler

On 19.07.2019 6:36, Ryan Lambert wrote:

Here's what I found tonight in your latest patch (9).  The output from
git apply is better, fewer whitespace errors, but still getting the
following.  Ubuntu 18.04 if that helps.

git apply -p1 < builtin_connection_proxy-9.patch
<stdin>:79: tab in indent.
                  Each proxy launches its own subset of backends.
<stdin>:634: indent with spaces.
    union {
<stdin>:635: indent with spaces.
       struct sockaddr_in inaddr;
<stdin>:636: indent with spaces.
       struct sockaddr addr;
<stdin>:637: indent with spaces.
    } a;
warning: squelched 54 whitespace errors
warning: 59 lines add whitespace errors.

A few more minor edits.  In config.sgml:

"If the <varname>max_sessions</varname> limit is reached new
connection are not accepted."
Should be "connections".

"The default value is 10, so up to 10 backends will server each database,"
"sever" should be "serve" and the sentence should end with a period
instead of a comma.

In postmaster.c:

/* The socket number we are listening for poolled connections on */
"poolled" --> "pooled"

"(errmsg("could not create listen socket for locahost")));"

"locahost" -> "localhost".

" * so to support order balancing we should do dome smart work here."

"dome" should be "some"?

I don't see any tests covering this new functionality.  It seems that
this is significant enough functionality to warrant some sort of
tests, but I don't know exactly what those would/should be.

Thank you once again for these fixes.
The updated patch is attached.

Concerning testing: I do not think that the connection pooler needs some
kind of special tests.
The idea of the built-in connection pooler is that it should be able to
handle all requests that normal Postgres can.
I have added to the regression tests an extra pass with connection proxies enabled.
Unfortunately, pg_regress alters some session variables, so the
backend becomes tainted and
pooling is not actually used (but communication through the proxy is still tested).
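
One way to check whether pooling is actually exercised in such runs is to look
at pg_pooler_state() while the test is running, e.g.:

    select n_clients, n_backends, n_dedicated_backends, n_transactions from pg_pooler_state();

If n_dedicated_backends grows with the number of clients, the backends have been
tainted and are effectively dedicated to their sessions.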

It is also possible to run pgbench with different numbers of
connections through the connection pooler, as sketched below.
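
A hypothetical run through the proxy port (assuming the default proxy_port of
6543; the database name and client counts are only illustrative) could look
like this:

    pgbench -S -n -c 100 -j 4 -p 6543 postgres

The same command with -p 5432 goes to dedicated backends, which makes it easy
to compare pooled and direct connections.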


Thanks,
Ryan

Attachments:

builtin_connection_proxy-10.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..f8b93f16ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..8486ce1e8d
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with many
+    CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms working on these data structures.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey,... But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a high-load system, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting from version 12 <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    As each Postgres backend is able to work only with a single database, each proxy process maintains a
+    hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
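+
+  <para>
+    For example, assuming the proxy launches worker backends over the loopback interface, an entry
+    similar to the following may be needed in <literal>pg_hba.conf</literal> (the authentication
+    method shown here is only an illustration and should match the local security policy):
+<programlisting>
+# allow local connections used by the connection pooler to start worker backends
+host    all    all    127.0.0.1/32    trust
+</programlisting>
+  </para>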
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
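+
+  <para>
+    For example, with the default settings a client can be switched to pooled mode simply by using the
+    proxy port instead of the standard one; no other changes on the client side are required:
+<programlisting>
+# pooled connection through the built-in proxy
+psql "host=localhost port=6543 dbname=postgres"
+
+# direct connection served by a dedicated backend
+psql "host=localhost port=5432 dbname=postgres"
+</programlisting>
+  </para>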
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between them.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can degrade performance because of large snapshots and lock contention.
+  </para>
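+
+  <para>
+    For example, with <varname>connection_proxies</varname> = 2 and <varname>session_pool_size</varname> = 10,
+    a server serving a single database with a single role will launch at most 2 * 10 = 20 pooled backends,
+    regardless of how many clients are connected through the proxy port.
+  </para>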
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large enough without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed by the connection pooler itself to launch worker backends.
+  </para>
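+
+  <para>
+    A minimal configuration enabling the built-in pooler might therefore look as follows
+    (the values are only an illustration and should be adjusted to the actual workload):
+<programlisting>
+# postgresql.conf
+connection_proxies = 2        # number of proxy processes
+session_pool_size  = 10       # pooled backends per database/role pair per proxy
+max_sessions       = 1000     # client sessions per proxy
+proxy_port         = 6543     # port accepting pooled connections
+#session_schedule  = 'load-balancing'   # optional, default is 'round-robin'
+</programlisting>
+  </para>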
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the state of the proxies using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
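+
+  <para>
+    For example, assuming the result set exposes the per-proxy counters under columns such as
+    <literal>n_clients</literal> and <literal>n_transactions</literal>, the totals over all proxies
+    can be obtained with a query like:
+<programlisting>
+SELECT sum(n_clients) AS clients, sum(n_transactions) AS transactions
+FROM pg_pooler_state();
+</programlisting>
+  </para>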
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it becomes possible to drop the database.
+  </para>
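+
+  <para>
+    For example, the following sequence (run by a superuser; it assumes the variable can be changed
+    through <command>ALTER SYSTEM</command>, and the database name is only an illustration)
+    shows how a database that still has pooled backends attached can be dropped:
+<programlisting>
+ALTER SYSTEM SET restart_pooler_on_reload = on;
+SELECT pg_reload_conf();      -- pooled backends are shut down
+DROP DATABASE mydb;           -- now succeeds
+</programlisting>
+  </para>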
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and will not notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially latency.
+    The overhead of connection proxying depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. And such a backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
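+
+  <para>
+    For example, to automatically abort transactions that stay idle for more than ten seconds:
+<programlisting>
+SET idle_in_transaction_session_timeout = '10s';
+</programlisting>
+  </para>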
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..5b19fef481 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..029f0dc4e3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..acbaed313a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a841..10a14d0e03 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e70d..ebff20a61a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120bec55..e0cdd9e8bb 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..a76db8d171
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e771e9..1564c8c611 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c23211b2..5d8b65c50a 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -12,7 +12,9 @@ subdir = src/backend/postmaster
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
+override CPPFLAGS :=  $(CPPFLAGS) -I$(top_builddir)/src/port -I$(top_srcdir)/src/port
+
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..bdba0f6e2c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
+
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad439ed..57d856fd64 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5525,6 +5709,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..6314f1eb8d
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1073 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE      (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE       101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*    buf;
+	int      rx_pos;
+	int      tx_pos;
+	int      tx_size;
+	int      buf_size;
+	int      event_pos;          /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*    client_port;        /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*  backend_proc;
+	int      backend_pid;
+	bool     backend_is_tainted; /* client changes session context */
+	bool     backend_is_ready;   /* ready for query */
+	bool     is_interrupted;     /* client interrupts query execution */
+	bool     is_disconnected;    /* connection is lost */
+	bool     write_pending;      /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool     read_pending;       /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int      handshake_response_size;
+	char*    handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*   proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;        /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;        /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;   /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*    pools;              /* Session pool map with dbname/role used as a key */
+	int      n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int      max_backends;       /* Maximal number of backends per database */
+	bool     shutdown;           /* Shutdown flag */
+	Channel* hangout;            /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;       /* List of idle backends */
+	Channel* pending_clients;     /* List of clients waiting for free backend */
+	Proxy*   proxy;               /* Owner of this pool */
+	int      n_launched_backends; /* Total number of launched backends */
+	int      n_idle_backends;     /* Number of backends in idle state */
+	int      n_connected_clients; /* Total number of connected clients */
+	int      n_idle_clients;      /* Number of clients in idle state */
+	int      n_pending_clients;   /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+//#define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+            /* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately, because its removal can be triggered while other epoll events are still being processed.
+ * So all such channels are linked into a list of pending deletes.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* interrupted flags makes channel_write to send 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or when the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int  msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'  /* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)  /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+                        /* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need libpq library to be able to establish connections to pool workers.
+		* This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		close(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		close(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		close(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		close(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*  proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+			    if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.  Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the state of the proxies.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning functions returns the following columns:
+ *
+ * pid            - proxy process identifier
+ * n_clients      - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools        - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends     - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes       - amount of data sent from backends to clients
+ * rx_bytes       - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+    FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+        ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+        get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
+
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d733530f..6d32d8fe8d 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbcf8e..d2806b7399 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -77,6 +77,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* single-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +138,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +586,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +693,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +724,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,14 +763,29 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -767,10 +797,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +840,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +880,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +890,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +953,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +984,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1001,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1408,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..62ec2afd2e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970f58..16ca58d9d0 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de256..79001ccf91 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..65f66db8e9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1285,6 +1293,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2137,6 +2155,42 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends. So the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2184,6 +2238,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12236..dac74a272d 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..5f528c1d72 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a257616d..1e12ee1884 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2e3c..86c0ef84e5 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d912b..3ea24a3b70 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb397..e101df179f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,7 +456,8 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
-
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
+ 
 extern int	pgwin32_noblock;
 
 #endif							/* FRONTEND */
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2afce5..05906e94a0 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..7f7a92a56a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11a8a..680eb5ee10 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -177,6 +179,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72952..e7207e2d9a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976fafa..9ff45b190a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d802b1..fdf53e9a8d 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e5c2..39bd2de85e 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4e01..38dda4dfe5 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000000..ebaa257f4b
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
#24Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#23)
3 attachment(s)
Re: Built-in connection pooler

Hello Konstantin,

> Concerning testing: I do not think that connection pooler needs some kind
> of special tests.
> The idea of built-in connection pooler is that it should be able to
> handle all requests normal postgres can do.
> I have added to regression tests extra path with enabled connection
> proxies.
> Unfortunately, pg_regress is altering some session variables, so the
> backend becomes tainted and so pooling is not actually used (but
> communication through proxy is tested).

Thank you for your work on this patch. I took some good time to really
explore the configuration and do some testing with pgbench. This round of
testing was done against patch 10 [1] and master branch commit a0555ddab9
from 7/22.

Thank you for explaining, I wasn't sure.

make installcheck-world: tested, passed
Implements feature: tested, passed
Documentation: I need to review again; I saw typos when testing but didn't
make note of the details.

Applying the patch [1] has improved from v9, but I'm still getting these:

git apply -p1 < builtin_connection_proxy-10.patch
<stdin>:1536: indent with spaces.
/* Has pending clients: serve one of them */
<stdin>:1936: indent with spaces.
/* If we attach new client to the existed backend,
then we need to send handshake response to the client */
<stdin>:2208: indent with spaces.
if (port->sock == PGINVALID_SOCKET)
<stdin>:2416: indent with spaces.
FuncCallContext* srf_ctx;
<stdin>:2429: indent with spaces.
ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
warning: squelched 5 whitespace errors
warning: 10 lines add whitespace errors.

I used a DigitalOcean droplet with 2 CPU and 2 GB RAM and SSD for this
testing, Ubuntu 18.04. I chose the smaller server size based on the
availability of similar and recent results around connection pooling [2]
that used an AWS EC2 m4.large instance (2 cores, 8 GB RAM) and pgbouncer.
Your prior pgbench tests [3] also focused on larger servers, so I wanted to
see how this works on smaller hardware.

Considering this from connpool.sgml:
"<varname>connection_proxies</varname> specifies number of connection proxy
processes which will be spawned. Default value is zero, so connection
pooling is disabled by default."

That hints to me that connection_proxies is the main configuration to start
with, so that was the only configuration for this feature that I changed
from the default. I adjusted shared_buffers to 500MB (25% of total) and
max_connections to 1000. Only having one proxy gives subpar performance
across the board, as does setting this value to 10. My hunch is this value
should roughly follow the # of CPUs available, but that's just a hunch.
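
To make that concrete, the non-default settings for the two-proxy runs below
boil down to roughly this postgresql.conf sketch (anything not listed here is
assumed to be left at its default):

shared_buffers = 500MB
max_connections = 1000
connection_proxies = 2    # also tested with 1 and 10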

I tested with 25, 75, 150, 300 and 600 connections. Initialized with a
scale of 1000 and ran read-only tests. Basic pgbench commands look like
the following; I have full commands and results from 18 tests included in
the attached MD file. Postgres was restarted between each test.

pgbench -i -s 1000 bench_test
pgbench -p 6543 -c 300 -j 2 -T 600 -P 60 -S bench_test

Tests were all run from the same server. I intend to do further testing
with external connections using SSL.

General observations:
For each value of connection_proxies, the TPS observed at 25 connections
held up reliably through 600 connections. For this server, using
connection_proxies = 2 was the fastest, but setting it to 1 or 10 still
provided **predictable** throughput. That seems to be a good attribute for
this feature.

Also predictable was the latency increase: doubling the connections roughly
doubles the latency. This was true with or without connection pooling.

Focusing on disabled connection pooling vs the feature with two proxies,
the results are promising, the breakpoint seems to be around 150
connections.

Low connections (25): -15% TPS; +45% latency
Medium connections (300): +21% TPS; +38% latency
High connections (600): Couldn't run without pooling... aka: Win for
pooling!

The two attached charts show the TPS and average latency of these two
scenarios. This feature does a great job of maintaining a consistent TPS
as connection numbers increase. This comes with tradeoffs of lower
throughput with < 150 connections, and higher latency across the board.
The increase in latency seems reasonable to me based on the testing I have
done so far. Compared to the results from [2] it seems latency is affecting
this feature a bit more than it does pgbouncer, yet not unreasonably so
given the benefit of having the feature built in and the reduced complexity.

I don't understand yet how max_sessions ties in.
Also, having both session_pool_size and connection_proxies seemed confusing
at first. I still haven't figured out exactly how they relate together in
the overall operation and their impact on performance. The new view
helped; I get the concept of **what** it is doing (connection_proxies =
more rows, session_pool_size = n_backends for each row), it's more a lack
of understanding the **why** regarding how it will operate.

postgres=# select * from pg_pooler_state();
 pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | tx_bytes  | rx_bytes  | n_transactions
------+-----------+---------------+---------+------------+----------------------+-----------+-----------+----------------
 1682 |        75 |             0 |       1 |         10 |                    0 | 366810458 | 353181393 |        5557109
 1683 |        75 |             0 |       1 |         10 |                    0 | 368464689 | 354778709 |        5582174
(2 rows)
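
For what it's worth, the way I've been sanity-checking my reading of those
two settings is a quick ad-hoc rollup over that view (my own query, not
anything from the patch): the number of rows tracks connection_proxies, and
each row's n_backends should stay around session_pool_size per pool (plus
any dedicated backends).

select count(*)        as proxies,
       sum(n_clients)  as total_clients,
       sum(n_backends) as total_backends
from pg_pooler_state();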

I am not sure how I feel about this:
"Non-tainted backends are not terminated even if there are no more
connected sessions."

Would it be possible (eventually) to monitor connection rates and free up
non-tainted backends after a time? The way I'd like to think of that
working would be:

If 50% of backends are unused for more than 1 hour, release 10% of
established backends.

The two percentages and time frame would ideally be configurable, but set up
in a way that it doesn't let go of connections too quickly, causing
unnecessary expense of re-establishing those connections. My thought is if
there's one big surge of connections followed by a long period of lower
connections, does it make sense to keep those extra backends established?
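
To sketch what I mean (my own rough C, not something from the patch -- it
assumes the pool kept an array of backend channels and a last-used timestamp
on each, plus an idle_backend_timeout_ms knob, none of which exist today):

/* Hypothetical policy sketch: if at least half of a pool's backends have
 * been idle longer than the timeout, release roughly 10% of them. */
static void
pool_release_idle_backends(SessionPool *pool, TimestampTz now)
{
	int	n = pool->n_launched_backends;
	int	n_idle = 0;
	int	i;

	/* Count backends idle longer than the (hypothetical) timeout */
	for (i = 0; i < n; i++)
		if (TimestampDifferenceExceeds(pool->backends[i]->last_used, now,
									   idle_backend_timeout_ms))
			n_idle += 1;

	if (n_idle * 2 >= n)
	{
		int	to_release = Max(1, n / 10);

		for (i = 0; i < n && to_release > 0; i++)
		{
			Channel *backend = pool->backends[i];

			if (TimestampDifferenceExceeds(backend->last_used, now,
										   idle_backend_timeout_ms))
			{
				channel_hangout(backend, "idle timeout");
				to_release -= 1;
			}
		}
	}
}

Something along those lines would keep the release gradual rather than
dropping a batch of established connections all at once.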

I'll give the documentation another pass soon. Thanks for all your work on
this, I like what I'm seeing so far!

[1]: /messages/by-id/attachment/102732/builtin_connection_proxy-10.patch
[2]: http://richyen.com/postgres/2019/06/25/pools_arent_just_for_cars.html
[3]: /messages/by-id/ede4470a-055b-1389-0bbd-840f0594b758@postgrespro.ru

Thanks,
Ryan Lambert

On Fri, Jul 19, 2019 at 3:10 PM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:


On 19.07.2019 6:36, Ryan Lambert wrote:

Here's what I found tonight in your latest patch (9). The output from git
apply is better, fewer whitespace errors, but still getting the following.
Ubuntu 18.04 if that helps.

git apply -p1 < builtin_connection_proxy-9.patch
<stdin>:79: tab in indent.
Each proxy launches its own subset of backends.
<stdin>:634: indent with spaces.
union {
<stdin>:635: indent with spaces.
struct sockaddr_in inaddr;
<stdin>:636: indent with spaces.
struct sockaddr addr;
<stdin>:637: indent with spaces.
} a;
warning: squelched 54 whitespace errors
warning: 59 lines add whitespace errors.

A few more minor edits. In config.sgml:

"If the <varname>max_sessions</varname> limit is reached new connection
are not accepted."
Should be "connections".

"The default value is 10, so up to 10 backends will server each database,"
"sever" should be "serve" and the sentence should end with a period
instead of a comma.

In postmaster.c:

/* The socket number we are listening for poolled connections on */
"poolled" --> "pooled"

"(errmsg("could not create listen socket for locahost")));"

"locahost" -> "localhost".

" * so to support order balancing we should do dome smart work here."

"dome" should be "some"?

I don't see any tests covering this new functionality. It seems that this
is significant enough functionality to warrant some sort of tests, but I
don't know exactly what those would/should be.

Thank you once again for these fixes.
Updated patch is attached.

Concerning testing: I do not think that connection pooler needs some kind
of special tests.
The idea of built-in connection pooler is that it should be able to handle
all requests normal postgres can do.
I have added to regression tests extra path with enabled connection
proxies.
Unfortunately, pg_regress is altering some session variables, so the
backend becomes tainted and so pooling is not actually used (but
communication through proxy is tested).

It is also possible to run pg_bench with different numbers of connections
through the connection pooler.

Thanks,
Ryan

Attachments:

pg_connpool_connection_proxies_two.png (image/png)
5�	
��_�%�����?��\��k^S���Ff.�]:������$�w�������7��o�Q�tZ����u��A%��|��v�u�w��K/��$��yy]tw?v��������|G����={�e;w��]w�5�Iz&����)�V:��N�$�c!*�W[��5�'��>���{�~���$���������@ P�N$7�D�������f��|z��H�i��^��x�3�8C�=�������(�(��*��*
Y��R^�W�7o.����J���@m��������t�Q����-�F�8"�0�)W�
�������ad��.����f9rD���n�mN���iIRGGG~��W^)I�n�������������O$��9j����t��.�����~�cR*%��������b���+��j����"Iw��
�B�����o�>���)����.�=��3��577K��9��v���|v�����x���n�<����+�Ws��?x�G���=�.��V��-�V��&�"�H�������3���^o
"����.��?��3��~����]q������v��9;���V$Q0�'��~�)����*�W[����}C����������nr������������mt������|�g4U2�������������Z�$�Rn�-�������$�B!�iX�^��_��C{����-e?�����M���78�B���`0?!�)�����kF�Oe��m�G>�����^��Iz�Rk�Xq;���sz����l6���=�-���'&&l��c�pb�C�� ��=��r��C���^������o{|�}�.=}��2�A8l�655��Ez�I���i��D"������|�3�����~�z]{�����Kkaw4I���rS���������+�`��F�2#����n�!���h4j����n


Y�������T^�1fj.�����8n��UN������ �������;8�E��;[�����-u���`�����=���k�j����u�����p8�g�}VW\q����w��r��p4I����$�^�z�q��-��s������m3'p+���U�X�h[&��a��q�V�\9ey5���8���t���Zs�	K��W����v��?X�@d���n�;�����S��s������a�p�
5iutvw�"455����o.'�kmmU2�,�^o.�f&�^�W����r��������#��W���c`����=���v���m����$�,�;����N^�W[�n�����������������t������XK�Tc�K����@ �'NX���N�_z>�K����x466Vuy�u0}�<�_���^]2f]��Gg6��M��7����;���|&��]q���}{Q>�v���Lpnr$IO��S�A/5�c���'0[������t��SN,W���:����C���^-�����5[t��]��K~�����C�i���z�;��-[�8�$���Jc�K����Jz{{g��jc�o~�����������K_���v)�f��X(����L&��rl�$�����	?~����L&���O����J�~P�.���u���z�3�a);�Go���v��Y�q 2��c��i�&�}��E-��pX^x�n�����������m�Yg�U6I?}����[g/���y�s����m��Z��=�Ym�`�y��
���}}}�x<�����X�>11�d2���l��k���+��7��
�t������>�W|���q�U���m���W��@d�L&���6�v31?u�T��t��`3�������.�#�<R��P�������N����]��s$����/T�&&&���y���r+���c-��dp��?����
	u���M��C{�:z�R�_�^���v����U����?�a�b1�{����a��\���G��c��#��9���v�Z-_�\7�tS�:�LF�XL�]vY
#@=x�S���_��M������ A0+MMM��~�O}�S�m�G�x��I�$�'������u�Ve2���S����v��N�522������$utt8&j�E���������b)���_���v�����@d���&���C;v�p:I'�[�l�}����;wj�V��~�_�HDMMMD�Z[�����C{u�g,e������K'��t 2�U&���#Gt��	KYGGG���n��e��l��o=����r�J�^������Y��������m�?^�W�w��[�������a����q��i��Ls	�B�������u��}�Y�r��?�c��������K��<���2k
�'�]���u'�+��<W�������X��������������"�0���[ZZ�w��%I3N�?��O�����{�qd�8Sa^�d��5�
�����
����t���>r�^�������������D�	��������t����n���G~���q7����g�I�s�=��/���]�
�r����q�?�K��olXg��}�����B�o����o�mp�Ji��L&�D��m��QR��Lzfwvv���W��o����H�0�.��������=3e�C�����K����E�^%	I��
���&�\��x�����Y�c���Z�|��m����6K���#I��c�����Gccc��_���[�s��Gu��u���j��D�����,��������� :Aoo��~��nw>I�k===:z���\K}��7�|�4G����Ok���E�y�:Q�7�xD��
���a���?�X�����oy[�������X(�����7������y=�L8���^�Z�a������jbbB�a�������^H+;@Z�����W�����O���|b���D`��F���bJ�R�m������O�:%i����\�zX��$���IG��<���^�������g�Y��P��|L����?�t��?����[��(}n�r_R�,T�DB===����������E�>,��3������:}����4�.{4���+f|��rl�8���;vH�����gO]v5�t�����m���^�q��?����%��V}�S����t:�`0�x<n;���S�`PW^y�|>�2��zzz,3�OG"�P,�}������>����C���o�$�����'��LF�N���e����@�:����o�A]26�?x�G���K~W����[���[��D`1��g�$)*�e�Y�8qB--��/�HD���3>�������nm��E��rK~�����
f���l�E�.����������xt�-�h��-s~�L&#��S�-��N�����ae�W*���S��z��]����?U�����,�������-5��bUM�����Y%�v�8����W��/r��6����kllL�x\�TJ�TJ������[�N���\�DB�G�TJ�l6�7�����7���B!�\�����Ps����j�O���+�7R�#�)�_,}���z������w]3���?N�`Ajkk�m��f��H$t��Q�^���1�EK��c�t��Q��>�O�=�����7�]z{{��s��H$d�v�����{�n�b1%	I��<T<�c@{�!��{s?���[��/�������vgo�o^����	�i�������^��y�}�C���>+�0�D���T���"I?}��$�&�mmm:x����+�N�0m����}���-3��ny<
K����@��1H���;rd21���������ss���_T_`�455ippP��C���s����.�ON�����5����?>>���:k��u��II�M7��X,�����&��LF^�����V���:s���'���b�_�x�����[����5���d�+������Z7�.�t��-�����N���G+V��$���O�XL�x|��u��	IRkk����$����A�X���_Kfcbb�-�1 IDAT�I
07FGG�
�{N��[/}�I������W�dl�b���%J�[�����%5���p�i�ljjr�k�l��a���Z����	�[�N�a�j
����$]����/i��mES�/_�\�Hd^��3(����2I_�o�I?q:��Z������!�`=��,=��d���H��/~�t�e�e��u�e��������q�>Q?���N�j�$�`��.��Q���+W�n/]���������e�������$�by5�����L��x�r}�+������������P{{{~���e�-_��qgw/�v����
�+}���L&#�0�&����E�����q�V�\9ey5����c���w�[�>����_>����s�N������_�7o�9��ST~�y����/n���k������h��=���DN����z��z��fw�p8�����x��~��j�`��'&���n�w�k21?�����-[�h��-z���������	��i�$]�u1w�\r�\�r]���lQ���|�0��x����Gc�T*���$��''���n�w�s21?�����o���,�&I7��������H$dF����v���SMy�u@���&�G�n�w�c21_�|~��EfbbB���544$�0,��?�x�[�K�'&&t��q�|>������������eG~����U*���y@�:~|21?z��}�n�L��>{~��E���RWWW���������.��iVK�%���WKKK���������/�P�>��~�a���o����k:X���`�=��db~�Hu�������+^1��@����{u�-�h��-:x�����J�|>m��]��{�n������d�f��K����}>�����;��zH���g������q=���:z�h>�����Vv�itt21��w����}21�+�7>hPgL���/H�-�a��|���V��/Y�f�������aZ�v�N�>-�����nI*����dt�����0g��&�o��}6o�L�_�����e�]�}��������.���_�3�8C/������e���R����&g�q��1E�Q���OJ���zn�������aH7�$]r�t��Rww���K��>�G?��I���H�4�L&#���������D"�JW��]Y���e2e2uttH�ZZZ�u�V�B��NnnrlL��e�t��������(��.M�Qokk���+�k�������j�,��m�-����,�hT===S���������V?���J��Z�l�c�������nuwwkppP7�|��~����R������}��jkk���>�'�x��0���K7�,��Jn�t�����6��[��'��W���I������zzz�J��������i����n����X'}����D"�r���Y'N���Q�'NL��?�`u�����-��}����9�x���$����h�5�������������[u���-������4�����H$��w�"I�+?��db~�`u���L&���3��@#
Es�%	�AIZT�:I:�$�:5��������M&��G��H ����������_w�uz��G�1��!I���?�L�x��}6n�L�=���0%������i�s�����3��X���M�L�I���OO&�CC����7O&�^����Z&��w������7I:X�~�����`=�)]|�db�j�������jhhHn�[RnM�d2�T*�pds�$,N������7�Q�>6L&��{���(�N����R����=�����K��aF�a�����$,�<3���w_u��_?���^=�����Nzww������t��=��db���W���N&�ox����4�������'��}��}���������7>f�$,/�0����?W���u������o|��tP�~������{�����x21_�v~�`�5|�n��J��������zeF�q������@C�����j��<����|�����y�"�pR8��������������f��f
��	~5����!�����}���Kg�)��}�W�2u��f���+=�����J��	:`�k�$=�N+�����b�D"!�0�{���6���D��j�@�y���M��W}����8C��M:p@��o����7H�]'=*=��t�
��.h�Y�vw���T$������<��nw~����������$MY*#��3�~-�����ow���m)k;�+�sZ��q�
���u7��[S@�G�Z��9�hi]��C�3
�@)zb���z+ME����������W���������������?�3���HY$��<�3����G~�����x�K��a��b��k��w�?t�����?�Q���G���|^���ddYV���e)���;m��7��~4����`��e�J�5��Y��������/�.����f5??�w-���������&,..�]�s��������z��_7���#G��O~�o~�w��rm9�':�� ������������.�/,,,�4�PJg��i�R��wr�%����������~�������������m+V����=��3��~_��G����Gz���O������z������s�U��FGGk:��oq�377'�4U(<��x\�DB/^��L$���\M{�T��S�������}��b����/W����O���=���H����0��$I?���v�	@�������^*�<�-��a�>'}ll�&�[�%�4u��I���u�7�z���y}�j�|���
�����S1�����?�@����>�m=�hT����{����t�mK��2C�X���f�@���,����k��?J�_�������u�����O~���'?�w�������?��t�Hz�*�&I�i�0w4�0��|��������������a�����o�_�upu���1������
����\o9r�XT<w��R)MMMu���
|H�D"�m�}?33#�4k�GW�k��7�����uU�W7���a��ww�����D����~��U>�W6����	��K�C��$�UP|H�699�W�g�'~�?�`����������
��;���������s���l?w����g�p.�S6�%��3���������{�^����%��]����P3�V����G�����<�w;P- Hb���3�ggg���<m�M���b�����@�����jw_�a���u����������T���J�#�����b�gY�$){�����(�����m����+�u�U�������������-�4��@������U�����oZv���������#?n�~�����~s�u�X3��������T:����������V����@��@�9��.����[�nI��i��t:�4�k��/�X}]��?����]O��wv��*�Vy�7�?���e��������<mW�^�an���t���K7�{�����'5�w��an���W��jd����z�����@�9s��&&&��'�(���,e���;��:B:���q����u�5k�����F�_�q'������P,��=m�PH�d���������O.�S&��h���Ov�q�^�0���0�����WCx�����j�1�V���.I�L��By5B:�n��?E���iZ����1������c������wl�1@O"���W���vMZi��������@����������g�������_�&}�m������Qq��{��� �����i�Q�/��^�h�1���;&}�{�9<B:�w��_T�Tj�1v��?*~�X���B:����]T������w���9��cl!������5����cd�vT��q���Zw�!������7p����#�?~\���u�h���������AJ��_������k����9����G����mk�1:�2�����tX\\�d=�����f���TJSSS
?�Fe������
mo�t�����e������G���B���T���Mw/�:����������9�L&������N�>���t:�P(�����f�@�X^�?*^.��CC�G��7[w�����T�xTO,S"���K���3332MSg��u���=�B����IZw{2�l��d2��?*4���;��l�T����dx�����p������D"�,k���/_�a�D"������/K�����d�}�����7�w]��-)|���+��w];�}+��$��X�}�Zt��FH�T(�����nY���hM���h{�}`�^~��W�+���{�u��u
�h���B��Qq��;Zs��
|H������?���J�y;�=��W�	��V����-\/�}{�)���K[���8p:��Ee�YMOO{����������]6�����7�������Kn0?��v���h�.
���a�Y$���y8\���Z�� ��9���9844����M�cffF5������b��w/	lH�,K�x\�T��z�H$������R��S�NIR�����M��/����]���=�w	��<��h�bj��5��<���c�{{H�}�U�wD���������������������#���Z�u�p�o��X��_6���������T(<m�e�4M�<yR�px����@�����<����h��;��.�_o��e����7�0�����z�}O����dH�F�J$k�h4�h4��/J����&&&�N�=�87��v�h{3���v=}�*t{�~'�?j���*8�����q���A��zU�B���r�4
�<��������k�M��a�h�a���7��l�e���j|�*��Y}�Y�<���n��HE0?�[����~��i�\N�L��j:/p!=�L���	�����Hd����7�@�}��t��~l]q���~��������v��r����;�����:�U�H.HA=p!����I����]�V�m�\�n�X�o��i�_���m.����uw�-�>��]�^_�^�+�L���:�<!=��~��jCwu���e<�V��V����p���H$����t���]/��uG������#�ww��{;R@�X��h4�wEH��G������^�h��K��������W�������p�hTsss�M����fgg����=n�\-��Jg�x�=��!)�����%I���^�L=���O?�a�6�4k�����6��o��������m[m��~��g���L&��������[�������q���!<^%�!����_^.K�������Z{
������u]��������!��rg�x���C��pg�z!�%O����|��i��\{
���sg���!�&w��C���;S����!|d�3u3���P�88�3��V|�\���G�_�_��{���<i![���<�v�j$I��|`tt�����b'��������"�{��;��m����;��?t�3uhJe.g�;�����yB����a|�����e�u����gO����t@��'���hY�~s�3������l�Pgj�q�t������?�_tx�W9����Z�}P���������_
�����:t'B:�w��-���z�
��|y�A�}��2�����������h�/�7����w���Ua��m���J!��|F���?������������[��].�>GHG���~[������$��@�033�P(������%@�(����N��.��I����MLL�4ME"Y�%�0$I�L����_�bQ�x\�����b�����OMM�YZG1��!���J�R�D"��H$�T*�����\����sJ$n@��\.�B��cU�GH���d�����<�ccc2MS�e�Tt���Y�:u��v��II���AAH��[�nI�FFF<��� ����p8�i��_���8����hee��2����~�K������%^��C�A�qr��s�s�o���}

ihh��2�!��)7o���;�����)�B�eN&*���vg�p���3��"����9�._��D"�SU�=�����<mW�^�a���%�����O��f�"�E
�9s�����9sF�B����eY�f����O}������!�LFKKK2�m���V2���*��dR�rY�x�m��r�d2>V�y!�������E=z��z����tw�!�.AH�K,,,�4�P
�S��FGGk:,..v��2�3��.AH�K���t�!�.AHG`�B!���,w[>���^���h4Z�s��b�Xw��������Y��P(���qO���L�s�X,z�U���t��������{��|>��������<m�����X4�l_o�F��BHG ���kzzZ�m��m%	���c���yE"�*F�)�J5�W.��a���4MO�d2�c��u�|��ZW�����b��b��x<���yw{�P ��)333�D"��3==�l6��"��,����\*�<!��>�z��������k�Tr��B!�r9w[4���Z������e�f�o��������y��m;����a�\�F����l�~}N���sU����D"a'	����i{���}.��5�����^*�����������>�z*��W�������6�:3��@���3����?����X,V32�Q���:u��������T3-h�YB�%Isss:}�����'$I��y�}��8?k��^�tI�D�����z�j�}�V#�#�>��3I���i��F�LNN*�J�,�0�������W*����r�i�2�4u��O��kb3}����e�Qso����R���T�L��>@��H�lV�T�}��dj�
OLL���bQ�i��O>q��QO�5??��_�U2���s�z��{��N��`4;;+�4�.	���$�4599��7^<����j�t����������}2�����:W�g?���Xwr,S*�R�P�`e�d2�D"�����]
���������a�� tT��[g��%����t:���Y���5��];��e�P(���s
�VO���rJ��u�\���u��$��c�b��r���������������n���4�XK������h4ZwD�4Mw�O3}�V#�#0���
��L�lj=�Zk���p���������X����,���D"Q��^�Z��@�M�>}�&�8����g3}����a���9����###:u���ql���6�h�fn�:�q-k=*���B*���.hIv*��iO$���9g���;Y�T�G�����9��w:m�iz��'�Hx�a������>���f���>_���X�z�4�0<��f��U��	��0lI5/�"�����j�����V�����'��{���P^������~t4�	7��?���z���>�Z����p]}�����lFu���mW��/..����o64�R��Y�@� ��%�t	B:]��@�XXX�i����2�����tX\\�d=Ve.g�;]��@� ��%�t	B:]��@����Q(R(R>��}D��7�,�<B:>(�
�B�����M����	MOO��me2�5�����a~�@��833���@�������2�}_*���x�bQ�t���u�9��m�}]�ti� >22��u��h=k����\.+���M�ly��N�e�.^��i�~���b�m��5/F���P(�l6���Yw��$���5G������2���i��qI�a
�B�,�f�BA���k�<F��IDAT�N��
��Q��6�5>>^�g+��F�n����L�T6��|�z��� �t�m�J$��r�m[���n{2���?33�x<���yw�z4u�������4e��"��gN���h�eY2�]����W�x<�.��mMOOkbbB�e�#��a�vg����n�m�J��o\'����*��W�O�v���R)�b1�mjjJ�in��n���oV���>�L�D������I���zF�����Z�������6M��&�)��;B:>�\�^�>��������<m�HD�a�\.o���n��x��\�t�3E?
���iTg.����S��GH�����g
z���vqF�o������R)����W�����d2�2�0j���t:(��x��W�O��0]�|��fY�L����'�:n$Q"������H$����7�|3.^������)�!���3���;�B����z:�V"���SodjjJ���57eK��M�m?{��L��<��X,nx�;�jiii�}\�zU�6=:@/�9�t�F����|�r�w"�����"��l�V(R�Pp�s�\������i�2C�lV��(�T�P������*
J�R�����S�<��M���>�!������E=z��z����tw�!�.AH�K��5����>�@�T�����������������Lw�K���t�!�.AH�K���t�!�.AH�K���t�!�.AH�K���t�!�.AH�K��kmX\\�d^��m��":iaaA���~�����m������9|�6�����������os��m���	�����t��!�+�����IEND�B`�
pg_connpool_connection_proxies_zero.png (image/png)
pg_conn_pool_notes_3.md (application/octet-stream)
#25Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#24)
1 attachment(s)
Re: Built-in connection pooler

Hello Ryan,

Thank you very much for review and benchmarking.
My answers are inside.

On 25.07.2019 0:58, Ryan Lambert wrote:

Applying the patch [1] has improved from v9, still getting these:

Fixed.

I used a DigitalOcean droplet with 2 CPU and 2 GB RAM and SSD for this
testing, Ubuntu 18.04.  I chose the smaller server size based on the
availability of similar and recent results around connection pooling
[2] that used AWS EC2 m4.large instance (2 cores, 8 GB RAM) and
pgbouncer.  Your prior pgbench tests [3] also focused on larger
servers so I wanted to see how this works on smaller hardware.

Considering this from connpool.sgml:
"<varname>connection_proxies</varname> specifies number of connection
proxy processes which will be spawned. Default value is zero, so
connection pooling is disabled by default."

That hints to me that connection_proxies is the main configuration to
start with so that was the only configuration I changed from the
default for this feature.  I adjusted shared_buffers to 500MB (25% of
total) and max_connections to 1000.  Only having one proxy gives
subpar performance across the board, as did setting this value to 10.
My hunch is this value should roughly follow the # of cpus available,
but that's just a hunch.
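
For reference, the setup described above corresponds roughly to the following
postgresql.conf fragment (a sketch reconstructed from this description, not the
exact file that was used; only connection_proxies was varied for the pooler):

    shared_buffers = 500MB      # 25% of the 2 GB droplet
    max_connections = 1000
    connection_proxies = 2      # values of 1 and 10 were also tried
    # session_pool_size and the other pooler settings left at their defaults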

I do not think that the number of proxies should depend on the number of CPUs.
A proxy process is not performing any computations, it is just redirecting
data from client to backend and vice versa.
Certainly, starting from some number of connections a single proxy becomes a
bottleneck. The same is true for pgbouncer: you need to start several
pgbouncer instances to be able to utilize all resources and provide the best
performance on a computer with a large number of cores. The optimal value
greatly depends on the workload; it is difficult to suggest a formula which
would calculate the optimal number of proxies for each configuration.

I don't understand yet how max_sessions ties in.
Also, having both session_pool_size and connection_proxies seemed
confusing at first.  I still haven't figured out exactly how they
relate together in the overall operation and their impact on performance.

"max_sessions" is mostly technical parameter. To listen client
connections I need to initialize WaitEvent set specdify maximal number
of events.
It should not somehow affect performance. So just  specifying large
enough value should work in most cases.
But I do not want to hardcode some constants and that it why I add GUC
variable.

"connections_proxies" is used mostly to toggle connection pooling.
Using more than 1 proxy is be needed only for huge workloads (hundreds
connections).

And "session_pool_size" is core parameter  which determine efficiency of
pooling.
The main trouble with it now, is that it is per database/user
combination. Each such combination will have its own connection pool.
Choosing optimal value of pooler backends is non-trivial task. It
certainly depends on number of available CPU cores.
But if backends and mostly disk-bounded, then optimal number of pooler
worker can be large than number of cores.
Presence of several pools make this choice even more complicated.
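
To make the arithmetic concrete (a worked example using the formula from the
patch documentation, with illustrative numbers):

    connection_proxies = 2
    session_pool_size = 10
    # one database and one role:
    #   max pooled backends = 2 * 10 * 1 * 1 = 20
    # two databases and two roles:
    #   max pooled backends = 2 * 10 * 2 * 2 = 80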

The new view helped, I get the concept of **what** it is doing
(connection_proxies = more rows, session_pool_size = n_backends for
each row), it's more a lack of understanding the **why** regarding how
it will operate.

postgres=# select * from pg_pooler_state();
 pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | tx_bytes  | rx_bytes  | n_transactions
------+-----------+---------------+---------+------------+----------------------+-----------+-----------+----------------
 1682 |        75 |             0 |       1 |         10 |                    0 | 366810458 | 353181393 |        5557109
 1683 |        75 |             0 |       1 |         10 |                    0 | 368464689 | 354778709 |        5582174
(2 rows)

I am not sure how I feel about this:
"Non-tainted backends are not terminated even if there are no more
connected sessions."

The PgPRO EE version of the connection pooler has an "idle_pool_worker_timeout"
parameter which allows terminating idle workers.
It is possible to implement it also for the vanilla version of the pooler. But
the primary intention of this patch was to minimize changes in the Postgres core.

Would it be possible (eventually) to monitor connection rates and free
up non-tainted backends after a time?  The way I'd like to think of
that working would be:

If 50% of backends are unused for more than 1 hour, release 10% of
established backends.

The two percentages and time frame would ideally be configurable, but
setup in a way that it doesn't let go of connections too quickly,
causing unnecessary expense of re-establishing those connections.  My
thought is if there's one big surge of connections followed by a long
period of lower connections, does it make sense to keep those extra
backends established?
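
Expressed as configuration knobs (purely hypothetical names, not part of this
patch), the proposal could look something like:

    # hypothetical parameters illustrating the policy sketched above
    #pool_idle_backend_fraction = 0.5   # trigger when >50% of backends are idle
    #pool_idle_backend_time = 1h        # ...for more than one hour
    #pool_release_fraction = 0.1        # then release 10% of established backends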

I think that an idle timeout is enough, but more complicated logic can also
be implemented.

I'll give the documentation another pass soon.  Thanks for all your
work on this, I like what I'm seeing so far!

Thank you very much.
I attached a new version of the patch with fixed indentation problems and
Win32-specific fixes.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-11.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..f8b93f1 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..8486ce1
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients such a model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms working with these data structures.
+  </para>
+
+  <para>
+    This is why most production Postgres installations use some kind of connection pooling:
+    pgbouncer, J2EE, odyssey,... But an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can become a bottleneck for a highly loaded system, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting from version 12 <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    the session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster will choose the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes which will be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, then the server will not be able to utilize all system resources.
+    But a too large value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres 5432, so that all connections to the databases will be pooled.
+    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
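+
+  <para>
+    For example, the per-proxy statistics described above can be inspected with a simple query
+    (column names follow the fields described above):
+<programlisting>
+SELECT pid, n_clients, n_backends, n_transactions, rx_bytes, tx_bytes
+  FROM pg_pooler_state();
+</programlisting>
+  </para>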
+
+  <para>
+    Since pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. After that it becomes possible to drop the database.
+  </para>
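+
+  <para>
+    A possible sequence for dropping a pooled database could look like this (the database name is
+    illustrative, and it is assumed that a superuser changes the setting at runtime with
+    <command>ALTER SYSTEM</command>):
+<programlisting>
+ALTER SYSTEM SET restart_pooler_on_reload = true;
+SELECT pg_reload_conf();   -- pooled backends are shut down
+DROP DATABASE mydb;
+</programlisting>
+  </para>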
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    Also it doesn't introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session level is used.
+    And if an application doesn't change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
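+
+  <para>
+    For example, a session that issues a <command>PREPARE</command> statement becomes bound to a
+    dedicated backend and no longer participates in pooling (table and statement names are illustrative):
+<programlisting>
+PREPARE get_item (int) AS SELECT * FROM items WHERE id = $1;  -- backend becomes dedicated
+EXECUTE get_item(1);
+</programlisting>
+  </para>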
+
+  <para>
+    Redirecting connections through a connection proxy definitely has a negative effect on total system performance and especially on latency.
+    The overhead of connection proxying depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    A pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections when
+    the number of connections is small enough (10). For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
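+
+  <para>
+    For example, a comparison like the one above could be reproduced by running the same read-only
+    workload against the proxy port and the regular port (client count, duration, ports and database
+    name are illustrative):
+<programlisting>
+pgbench -S -n -c 10 -T 60 -p 6543 postgres   # through the connection proxy
+pgbench -S -n -c 10 -T 60 -p 5432 postgres   # direct connections
+</programlisting>
+  </para>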
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time, and such a backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
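+
+  <para>
+    For example (the timeout value is illustrative):
+<programlisting>
+SET idle_in_transaction_session_timeout = '30s';
+</programlisting>
+  </para>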
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..5b19fef 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..029f0dc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &connpool;
 
  </part>
 
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..acbaed3 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a..10a14d0 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..de787ba
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad43..57d856f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose connection pool for this session.
+ * Right now sessions cannot be moved between pools (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start an proxy worker process.
+ *		Start a proxy worker process.
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..f5de1f0
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1078 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove channel immediately because it can be triggered by other epoll events.
+ * So link all channels in L1 list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it for him. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* interrupted flags makes channel_write to send 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because of using edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or operation is successfully completed, false in case of error
+ * or socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need libpq library to be able to establish connections to pool workers.
+		* This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. The client will be assigned to a concrete session pool
+ * once its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
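+	/* No backend is attached yet; one will be assigned once the startup packet identifies the target pool */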
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; an error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
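+				/* The postmaster hands over the accepted client socket; receive it with pg_recv_sock() */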
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still be pending for them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to publish information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/*
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to the proxy
+ * n_ssl_clients  - number of clients using the SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by the proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted ones)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
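+	/* Emit one row per proxy; counters are read directly from the ProxyState array in shared memory */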
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..23e9706 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,19 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a single-linked list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +145,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +593,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +700,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +731,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
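+		/* Reuse a slot previously released by DeleteWaitEventFromSet; the free list is threaded through "pos" */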
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +770,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
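+	/* With epoll the freed slot is pushed onto the free list (see WaitEventAdjustEpoll); poll and win32 instead compact the arrays */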
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +804,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* Under epoll, EPOLLET already provides edge-triggered behaviour, so there is nothing to re-arm here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +847,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +887,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +897,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +960,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +991,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1008,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1415,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..62ec2af 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..79001cc 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..47b3845 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,42 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by ont connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2239,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2318,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4635,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4821,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4911,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5085,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5101,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5127,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5488,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5783,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5807,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6141,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6204,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6838,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6880,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6942,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +6989,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7564,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7770,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7827,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8108,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8166,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
@@ -8175,7 +8250,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8272,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8290,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8444,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8453,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8535,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8563,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8641,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9354,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9508,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9736,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9971,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10027,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10043,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10399,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10458,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10538,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10679,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10702,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10761,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11197,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11210,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11248,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11278,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11566,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..5f528c1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 96415a9..6d1a926 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..86c0ef8 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..05906e9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..36312d4 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -149,6 +151,8 @@ typedef struct WaitEvent
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +181,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c0b8e3f..24569d8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 973691c..bcbfec3 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#26Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#25)
1 attachment(s)
Re: Built-in connection pooler

I attached a new version of the patch with fixed indentation problems and
Win32-specific fixes.

Great, this latest patch applies cleanly to master. installcheck-world
still passes.

"connections_proxies" is used mostly to toggle connection pooling.
Using more than 1 proxy is be needed only for huge workloads (hundreds
connections).

My testing showed that using only one proxy performed very poorly compared to
not using the pooler, even at 300 connections, with -3% TPS. At lower numbers
of connections it was much worse than the other configurations I tried. I just
shared my full pgbench results [1]; the "No Pool" and "# Proxies 2" data is
what I used to generate the charts I previously shared. I had referred to the
1 proxy and 10 proxy data earlier but hadn't shared those results, sorry about
that.
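
For anyone wanting to run a similar comparison, a minimal sketch of the kind
of select-only pgbench run that could be used (the database name "bench",
scale, client count and duration are illustrative; it assumes the pooler is
listening on its default proxy_port of 6543 and the regular port is 5432):

    pgbench -i -s 100 bench
    # direct connections, no pooling
    pgbench -n -S -c 300 -j 8 -T 60 -p 5432 bench
    # the same workload routed through the built-in pooler's proxy port
    pgbench -n -S -c 300 -j 8 -T 60 -p 6543 bench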

And "session_pool_size" is core parameter which determine efficiency of
pooling.
The main trouble with it now, is that it is per database/user
combination. Each such combination will have its own connection pool.
Choosing optimal value of pooler backends is non-trivial task. It
certainly depends on number of available CPU cores.
But if backends and mostly disk-bounded, then optimal number of pooler
worker can be large than number of cores.

I will do more testing around this variable next. It seems that increasing
session_pool_size with connection_proxies = 1 might help, and that leaving it
at its default was my problem.
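
For reference, a minimal postgresql.conf sketch for testing the pooler with a
single proxy (the values below are illustrative, not recommendations):

    connection_proxies = 1     # one proxy process; more only helps for very large client counts
    session_pool_size = 20     # backends per database/user pool (default is 10)
    proxy_port = 6543          # pooled clients connect here
    port = 5432                # still needed; the proxy launches worker backends through it
    max_sessions = 1000        # client sessions one proxy can handle (default)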

The PgPRO EE version of the connection pooler has an "idle_pool_worker_timeout"
parameter which makes it possible to terminate idle workers.

+1

It is possible to implement it also for the vanilla version of the pooler. But
the primary intention of this patch was to minimize changes in Postgres core.

Understood.

I attached a patch to apply after your latest patch [2] with my suggested
changes to the docs. I tried to make things read smoother without altering
your meaning. I don't think the connection pooler chapter fits in The SQL
Language section; it seems more like Server Admin functionality, so I moved
it to follow the chapter on HA, load balancing and replication. That made
more sense to me looking at the overall ToC of the docs.

Thanks,

[1]: https://docs.google.com/spreadsheets/d/11XFoR26eiPQETUIlLGY5idG3fzJKEhuAjuKp6RVECOU
[2]: /messages/by-id/attachment/102848/builtin_connection_proxy-11.patch

*Ryan*

Attachments:

builtin_connection_proxy-docs-1.patch (application/octet-stream)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fdbbc0abdf..2d82cddc7e 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -728,7 +728,7 @@ include_dir 'conf.d'
       <listitem>
         <para>
           The maximum number of client sessions that can be handled by
-          one connection proxy when session pooling is switched on.
+          one connection proxy when session pooling is enabled.
           This parameter does not add any memory or CPU overhead, so
           specifying a large <varname>max_sessions</varname> value
           does not affect performance.
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
index 8486ce1e8d..a4b27209ef 100644
--- a/doc/src/sgml/connpool.sgml
+++ b/doc/src/sgml/connpool.sgml
@@ -9,22 +9,22 @@
 
   <para>
     <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
-    For large number of clients such model can cause consumption of large number of system
-    resources and lead to significant performance degradation, especially at computers with large
-    number of CPU cores. The reason is high contention between backends for postgres resources.
-    Also size of many Postgres internal data structures are proportional to the number of
-    active backends as well as complexity of algorithms for this data structures.
+    For large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures are proportional to the number of
+    active backends as well as complexity of algorithms for the data structures.
   </para>
 
   <para>
-    This is why most of production Postgres installation are using some kind of connection pooling:
-    pgbouncer, J2EE, odyssey,... But external connection pooler requires additional efforts for installation,
+    This is why many production Postgres installations are using some kind of connection pooling, such as 
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional efforts for installation,
     configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
-    single-threaded and so can be bottleneck for highload system, so multiple instances of pgbouncer have to be launched.
+    single-threaded and so can be a bottleneck on high load systems, so multiple instances of pgbouncer have to be launched.
   </para>
 
   <para>
-    Starting from version 12 <productname>PostgreSQL</productname> provides built-in connection pooler.
+    Starting with version 12 <productname>PostgreSQL</productname> provides built-in connection pooler.
     This chapter describes architecture and usage of built-in connection pooler.
   </para>
 
@@ -58,8 +58,8 @@
   </para>
 
   <para>
-    Built-in connection pooler is accepted connections on separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
-    If client is connected to postgres through standard port (<varname>port</varname> configuration option, default value is 5432), then normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    Built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If client is connected to Postgres through standard port (<varname>port</varname> configuration option, default value is 5432), then normal (<emphasis>dedicated</emphasis>) backend is created. It works only
     with this client and is terminated when client is disconnected. Standard port is also used by proxy itself to
     launch new worker backends. It means that to enable connection pooler Postgres should be configured
     to accept local connections (<literal>pg_hba.conf</literal> file).
@@ -73,8 +73,8 @@
 
   <para>
     Postmaster accepts connections on proxy port and redirects it to one of connection proxies.
-    Right now sessions and bounded to proxy and can not migrate between them.
-    To provide uniform load balancing of proxies, postmaster is using one of three scheduling policies:
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, postmaster uses one of three scheduling policies:
     <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
     In the last case postmaster will choose proxy with smallest number of already attached clients, with
     extra weight added to SSL connections (which consume more CPU).
@@ -92,14 +92,14 @@
   </para>
 
   <para>
-    <varname>connection_proxies</varname> specifies number of connection proxy processes which will be spawned.
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
     Default value is zero, so connection pooling is disabled by default.
   </para>
 
   <para>
-    <varname>session_pool_size</varname> specifies maximal number of backends per connection pool. Maximal number of laucnhed non-dedicated backends in pooling mode is limited by
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. Maximal number of launched non-dedicated backends in pooling mode is limited by
     <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
-    If number of backends is too small, then server will not be able to utilize all system resources.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
     But too large value can cause degradation of performance because of large snapshots and lock contention.
   </para>
 
@@ -112,7 +112,7 @@
   <para>
     Connection proxy accepts connections on special port, defined by <varname>proxy_port</varname>.
     Default value is 6543, but it can be changed to standard Postgres 5432, so by default all connections to the databases will be pooled.
-    But it is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
     It is needed for connection pooler itself to launch worker backends.
   </para>
 
@@ -129,8 +129,8 @@
   </para>
 
   <para>
-    As far as pooled backends are not terminated on client exist, it will not
-    be possible to drop database to which them are connected.  It can be achieved without server restart using <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> cause shutdown of all pooled backends after execution of <function>pg_reload_conf()</function> function. Then it will be possible to drop database.
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  It can be achieved without server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
   </para>
 
  </sect1>
@@ -139,8 +139,8 @@
   <title>Built-in Connection Pooler Pros and Cons</title>
 
   <para>
-    Unlike pgbouncer and other external connection poolers, built-in connection pooler doesn't require installation and configuration of some other components.
-    Also it doesn't introduce any limitations for clients: existed clients can work through proxy and don't notice any difference.
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of some other components.
+    It also does not introduce any limitations for clients: existing clients can work through proxy and don't notice any difference.
     If client application requires session context, then it will be served by dedicated backend. Such connection will not participate in
     connection pooling but it will correctly work. This is the main difference with pgbouncer,
     which may cause incorrect behavior of client application in case of using other session level pooling policy.
@@ -156,16 +156,15 @@
   </para>
 
   <para>
-    Redirecting connections through connection proxy definitely have negative effect on total system performance and especially latency.
-    Overhead of connection proxing depends on too many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
-    Pgbench benchmark in select-only mode shows almost two times worser performance for local connections through connection pooler comparing with direct local connections when
-    number of connections is small enough (10). For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), pgbench benchmark in select-only mode shows almost two times worse performance for local connections through connection pooler compared with direct local connections. For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
   </para>
 
   <para>
     Another obvious limitation of transaction level pooling is that long living transaction can cause starvation of
     other clients. It greatly depends on application design. If application opens database transaction and then waits for user input or some other external event, then backend can be in <emphasis>idle-in-transaction</emphasis>
-    state for long enough time. And such backend can not be rescheduled for some another session.
+    state for long enough time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
     The obvious recommendation is to avoid long-living transaction and setup <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
   </para>
 
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 029f0dc4e3..ee6e2bdeb6 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,7 +109,6 @@
   &mvcc;
   &perform;
   &parallel;
-  &connpool;
 
  </part>
 
@@ -159,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
#27Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#26)
1 attachment(s)
Re: Built-in connection pooler

I attached a patch to apply after your latest patch [2] with my
suggested changes to the docs.  I tried to make things read smoother
without altering your meaning.  I don't think the connection pooler
chapter fits in The SQL Language section, it seems more like Server
Admin functionality so I moved it to follow the chapter on HA, load
balancing and replication.  That made more sense to me looking at the
overall ToC of the docs.

Thank you.
I have committed your documentation changes in my Git repository and
attach a new patch with your corrections.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-12.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..2758506 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets number of connection proxies.
+          Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So maximal number of non-tainted backends is  <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies in case of
+          connection pooling. Default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated from the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..a4b2720
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,173 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures are proportional to the number of
+    active backends as well as complexity of algorithms for the data structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations are using some kind of connection pooling, such as 
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional efforts for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12 <productname>PostgreSQL</productname> provides built-in connection pooler.
+    This chapter describes architecture and usage of built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    Built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    Number of proxy processes is controlled by <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    As far as each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools for each pair of <literal>dbname,role</literal>.
+    Maximal number of backends which can be spawned by connection pool is limited by
+    <varname>session_pool_size</varname> configuration variable.
+    So maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize number of changes in Postgres core, built-in connection pooler is not trying to save/restore
+    session context. If session context is modified by client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to other session.
+    Once this session is terminated, backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    Built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If client is connected to Postgres through standard port (<varname>port</varname> configuration option, default value is 5432), then normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when client is disconnected. Standard port is also used by proxy itself to
+    launch new worker backends. It means that to enable connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If client application is connected through proxy port, then its communication with backend is always
+    performed through proxy. Even if it changes session context and backend becomes <emphasis>tainted</emphasis>,
+    still all traffic between this client and backend comes through proxy.
+  </para>
+
+  <para>
+    Postmaster accepts connections on proxy port and redirects it to one of connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case postmaster will choose proxy with smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    Connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. Maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large value can cause degradation of performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be large enough without any essential negative impact on system resources consumption.
+    Default value is 1000. So maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    Connection proxy accepts connections on special port, defined by <varname>proxy_port</varname>.
+    Default value is 6543, but it can be changed to standard Postgres 5432, so by default all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    Postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    Policy can be set using <varname>session_schedule</varname> configuration variable. Default policy is
+    <literal>round-robin</literal> which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal> which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  It can be achieved without server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of some other components.
+    It also does not introduce any limitations for clients: existing clients can work through proxy and don't notice any difference.
+    If client application requires session context, then it will be served by dedicated backend. Such connection will not participate in
+    connection pooling but it will correctly work. This is the main difference with pgbouncer,
+    which may cause incorrect behavior of client application in case of using other session level pooling policy.
+    And if application is not changing session context, then it can be implicitly pooled, reducing number of active backends.
+  </para>
+
+  <para>
+    The main limitation of current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve speed of simple queries
+    up to two times. But prepared statements can not be handled by pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), pgbench benchmark in select-only mode shows almost two times worse performance for local connections through connection pooler compared with direct local connections. For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction level pooling is that long living transaction can cause starvation of
+    other clients. It greatly depends on application design. If application opens database transaction and then waits for user input or some other external event, then backend can be in <emphasis>idle-in-transaction</emphasis>
+    state for long enough time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transaction and setup <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..5b19fef 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..acbaed3 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a..10a14d0 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..de787ba
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[])
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (!conn || PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		return NULL;
+	}
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad43..57d856f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* We need to accept local connections from the proxy workers */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/*
+ * Try to estimate the workload of a proxy.
+ * The ProxyState array provides a lot of information about proxy state:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and of SSL connections
+ * (SSL connections require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/*
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it would not be difficult to implement),
+ * so to balance the load we have to make a good choice here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
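+						/*
+						 * Client connections arriving on the proxy port are not
+						 * served by a fresh backend: the accepted socket is passed
+						 * to one of the proxy workers through its socket pair.
+						 */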
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
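+/*
+ * Parse a startup packet that has already been read from the client.
+ * This is split out of ProcessStartupPacket so that the connection proxy can
+ * call it on packets it reads itself, using its own memory context rather
+ * than TopMemoryContext.
+ */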
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..f5de1f0
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1078 @@
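+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Worker process implementing the built-in connection proxy.
+ *
+ * The proxy multiplexes client sessions over a pool of backends: clients and
+ * backends are represented by Channel structures linked through "peer"
+ * pointers, grouped into SessionPools keyed by database/user, and driven by
+ * a WaitEventSet-based event loop.
+ *
+ *-------------------------------------------------------------------------
+ */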
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for a client channel, null for a backend channel */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own Proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per session pool */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
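+/*
+ * Proxy instance of this worker process. MyProxyId identifies the worker,
+ * MyProxySocket is its end of the socket pair used to receive client sockets
+ * from the postmaster, and ProxyState points to the shared-memory array of
+ * per-proxy statistics.
+ */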
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/*
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client onto this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/*
+ * Parse the client's startup packet and assign the client to the proper session pool based on dbname/role.
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link such channels into a single-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because of edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle or non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Consider increasing the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is obtained by opening a libpq connection to the postmaster, which forks it through the normal BackendStartup path.
+ */
+static Channel*
+backend_start(SessionPool* pool)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically into the server, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. The client will be assigned to a concrete session pool
+ * once its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			/* Try to start a new backend to replace the terminated one */
+			Channel* new_backend = backend_start(chan->pool);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because their peers may still appear in the ready-event list.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
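+/*
+ * Main entry point of a connection proxy worker process.
+ */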
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with background workers is how to pass a socket to them and obtain their PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose the state of the proxies.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/*
+ * Return information about the state of the proxies.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
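+ *
+ * Assuming the SQL-level declaration of this function is installed by the patch,
+ * the state can be queried simply as: SELECT * FROM pg_pooler_state();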
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..23e9706 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,19 @@
 #error "no wait set implementation available"
 #endif
 
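+/*
+ * Exported so that callers (e.g. the connection proxy) can check whether truly
+ * edge-triggered socket events (EPOLLET) are available on this platform.
+ */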
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of free list of event slots, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -137,9 +145,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -585,6 +593,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -691,9 +700,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +731,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +770,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the given position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +804,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* Under epoll, WL_SOCKET_EDGE sockets are truly edge-triggered (EPOLLET), so calls that merely emulate edge triggering can be skipped */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +847,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +887,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,21 +897,39 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -897,9 +960,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +991,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1008,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1336,7 +1415,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..62ec2af 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
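+			/*
+			 * If requested, pooled backends (application_name "pool_worker")
+			 * simply exit on configuration reload; the proxy will launch a
+			 * fresh backend with the new settings when one is needed again.
+			 */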
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
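+	/* Advisory locks are session-level state, so dedicate this backend to its client */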
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..79001cc 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
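+/* GUC parameters for the built-in connection proxy (see postmaster/proxy.c) */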
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..47b3845 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,42 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximum number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and distributes connections among the proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximum number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximum number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2239,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2318,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session scheduling policy for the connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4635,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4821,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4911,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5085,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5101,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5127,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5488,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5783,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5807,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6141,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6204,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6838,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6880,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6942,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +6989,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7564,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7770,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7827,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8108,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8166,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
@@ -8175,7 +8250,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8272,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8290,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8444,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8453,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8535,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8563,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8641,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9354,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9508,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9736,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9971,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10027,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10043,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10399,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10458,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10538,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10679,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10702,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10761,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11197,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11210,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11248,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11278,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11566,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..5f528c1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pooler
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 96415a9..6d1a926 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..86c0ef8 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..05906e9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[]);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..36312d4 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -149,6 +151,8 @@ typedef struct WaitEvent
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +181,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, used temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c0b8e3f..24569d8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 973691c..bcbfec3 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#28Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#25)
Re: Built-in connection pooler

Hi Konstantin,

I've started reviewing this patch and experimenting with it, so let me
share some initial thoughts.

1) not handling session state (yet)

I understand handling session state would mean additional complexity, so
I'm OK with not having it in v1. That being said, I think this is the
primary issue with connection pooling on PostgreSQL - configuring and
running a separate pool is not free, of course, but when people complain
to us it's when they can't actually use a connection pool because of
this limitation.

So what are your plans regarding this feature? I think you mentioned
you already have the code in another product. Do you plan to submit it
in the pg13 cycle, or what's the plan? I'm willing to put some effort
into reviewing and testing that.

FWIW it'd be nice to expose it as some sort of interface, so that other
connection pools can leverage it too. There are use cases that don't
work with a built-in connection pool (say, PAUSE/RESUME in pgbouncer
allows restarting the database) so projects like pgbouncer or odyssey
are unlikely to disappear anytime soon.

I also wonder if we could make it more permissive even in v1, without
implementing dump/restore of session state.

Consider for example patterns like this:

BEGIN;
SET LOCAL enable_nestloop = off;
...
COMMIT;

or

PREPARE x(int) AS SELECT ...;
EXECUTE x(1);
EXECUTE x(2);
...
EXECUTE x(100000);
DEALLOCATE x;

or perhaps even

CREATE FUNCTION f() AS $$ ... $$
LANGUAGE sql
SET enable_nestloop = off;

In all those cases (and I'm sure there are other similar examples) the
connection pool marks the session as 'tainted' and we never reset that.
So even when an application tries to play nice, it can't use pooling.

Would it be possible to maybe track this with more detail (number of
prepared statements, ignore SET LOCAL, ...)? That should allow us to do
pooling even without full support for restoring session state.
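
Something along these lines is what I have in mind - just a sketch, the
struct and the rules are made up for illustration, nothing like this
exists in the patch:

#include <stdbool.h>

/*
 * Instead of a single is_tainted flag, keep per-session counters and
 * treat the backend as reusable again once all session-local state is
 * gone.  (Field names invented for this sketch.)
 */
typedef struct SessionTaint
{
    int     n_prepared_statements; /* bumped by PREPARE, dropped by DEALLOCATE */
    int     n_temp_tables;         /* temporary tables still alive */
    bool    guc_changed;           /* a non-LOCAL SET was executed */
} SessionTaint;

static bool
session_is_reusable(const SessionTaint *taint)
{
    /* SET LOCAL is ignored: it is reverted at transaction end anyway */
    return taint->n_prepared_statements == 0 &&
           taint->n_temp_tables == 0 &&
           !taint->guc_changed;
}

With something like that, DEALLOCATE (or DISCARD ALL) would make the
backend poolable again instead of dedicating it forever.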

2) configuration

I think we need to rethink how the pool is configured. The options
available at the moment are more a consequence of the implementation and
are rather cumbersome to use in some cases.

For example, we have session_pool_size, which is (essentially) the
number of backends kept in the pool. Which seems fine at first, because
it seems like you might say

max_connections = 100
session_pool_size = 50

to say the connection pool will only ever use 50 connections, leaving
the rest for "direct" connection. But that does not work at all, because
the number of backends the pool can open is

session_pool_size * connection_proxies * databases * roles

which pretty much means there's no limit, because while we can specify
the number of proxies, the number of databases and roles is arbitrary.
And there's no way to restrict which dbs/roles can use the pool.

So you can happily do this

max_connections = 100
connection_proxies = 4
session_pool_size = 10

pgbench -c 24 -U user1 test1
pgbench -c 24 -U user2 test2
pgbench -c 24 -U user3 test3
pgbench -c 24 -U user4 test4

at which point it's pretty much game over, because each proxy has 4
pools, each with ~6 backends, 96 backends in total. And because
non-tainted connections are never closed, no other users/dbs can use the
pool (will just wait indefinitely).

To allow practical configurations, I think we need to be able to define:

* which users/dbs can use the connection pool
* minimum/maximum pool size per user, per db and per user/db
* maximum number of backend connections

We need to be able to close connections when needed (when not assigned,
and we need the connection for someone else).

Plus those limits need to be global, not "per proxy" - it's just strange
that increasing connection_proxies bumps up the effective pool size.

I don't know what's the best way to specify this configuration - whether
to store it in a separate file, in some system catalog, or what.
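
For illustration only (nothing like this exists in the patch), the kind
of per-database/role limit entry I have in mind might look like this -
the important part being that the limits apply globally, across all
proxies:

/* hypothetical shape of a pool limit entry, invented for this example */
typedef struct PoolLimit
{
    char    datname[64];    /* database this entry applies to, "*" = any */
    char    rolname[64];    /* role this entry applies to, "*" = any */
    int     min_backends;   /* backends kept open even when idle */
    int     max_backends;   /* hard cap, counted across all proxies */
} PoolLimit;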

3) monitoring

I think we need much better monitoring capabilities. At this point we
have a single system catalog (well, a SRF) giving us proxy-level
summary. But I think we need much more detailed overview - probably
something like pgbouncer has - listing of client/backend sessions, with
various details.

Of course, that's difficult to do when those lists are stored in private
memory of each proxy process - I think we need to move this to shared
memory, which would also help to address some of the issues I mentioned
in the previous section (particularly that the limits need to be global,
not per proxy).
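
Just to sketch the level of detail I mean (purely hypothetical, the
field names are invented), a per-client entry in shared memory could be
as simple as:

#include <stdbool.h>

/* one entry per client session, exposed through a view like pg_stat_activity */
typedef struct PooledSessionEntry
{
    int     proxy_pid;       /* proxy worker handling this client */
    int     backend_pid;     /* backend currently assigned, 0 if none */
    char    datname[64];
    char    usename[64];
    bool    is_tainted;      /* client got a dedicated backend */
    long    n_transactions;  /* transactions executed for this client */
} PooledSessionEntry;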

4) restart_pooler_on_reload

I find it quite strange that restart_pooler_on_reload binds restart of
the connection pool to reload of the configuration file. That seems like
a rather surprising behavior, and I don't see why you would ever want
that. Currently it seems to be the only way to force the proxies to close
the connections (the docs mention DROP DATABASE), but why shouldn't we
have separate functions to do that? In particular, why would you want to
close connections for all databases and not just for the one you're
trying to drop?

5) session_schedule

It's nice we support different strategies to assign connections to
worker processes, but how do you tune it? How do you pick the right
option for your workload? We either need to provide metrics to allow
informed decision, or just not provide the option.

And "load average" may be a bit misleading term (as used in the section
about load-balancing option). It kinda suggests we're measuring how busy
the different proxies were recently (that's what load average in Unix
does) - by counting active processes, CPU usage or whatever. But AFAICS
that's not what's happening at all - it just counts the connections,
with SSL connections counted as more expensive.
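
In other words, my (possibly wrong) reading is that the metric is just a
weighted connection count, something like the sketch below - the exact
weight of an SSL client is my assumption, not taken from the patch:

/* illustrative only - the real weighting in the patch may differ */
static int
proxy_load(int n_clients, int n_ssl_clients)
{
    /* every client counts once, SSL clients effectively count twice */
    return n_clients + n_ssl_clients;
}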

6) issues during testing

While testing, I've seen a couple of issues. Firstly, after specifying a
db that does not exist:

psql -h localhost -p 6543 xyz

just hangs and waits forever. In the server log I see this:

2019-07-25 23:16:50.229 CEST [31296] FATAL: database "xyz" does not exist
2019-07-25 23:16:50.258 CEST [31251] WARNING: could not setup local connect to server
2019-07-25 23:16:50.258 CEST [31251] DETAIL: FATAL: database "xyz" does not exist

But the client somehow does not get the message and waits.

Secondly, when trying this

pgbench -p 5432 -U x -i -s 1 test
pgbench -p 6543 -U x -c 24 -C -T 10 test

it very quickly locks up, with plenty of non-granted locks in pg_locks,
but I don't see any interventions by the deadlock detector, so I presume
the issue is somewhere else. I don't see any such issues when running
without the connection pool or without the -C option:

pgbench -p 5432 -U x -c 24 -C -T 10 test
pgbench -p 6543 -U x -c 24 -T 10 test

This is with default postgresql.conf, except for

connection_proxies = 4

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#29Dave Cramer
pg@fastcrypt.com
In reply to: Tomas Vondra (#28)
Re: Built-in connection pooler

Responses inline. I just picked up this thread so please bear with me.

On Fri, 26 Jul 2019 at 16:24, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

Hi Konstantin,

I've started reviewing this patch and experimenting with it, so let me
share some initial thoughts.

1) not handling session state (yet)

I understand handling session state would mean additional complexity, so
I'm OK with not having it in v1. That being said, I think this is the
primary issue with connection pooling on PostgreSQL - configuring and
running a separate pool is not free, of course, but when people complain
to us it's when they can't actually use a connection pool because of
this limitation.

So what are your plans regarding this feature? I think you mentioned
you already have the code in another product. Do you plan to submit it
in the pg13 cycle, or what's the plan? I'm willing to put some effort
into reviewing and testing that.

I too would like to see the plan of how to make this feature complete.

My concern here is that for the pgjdbc client, at least, *every* connection
sets some parameters, so from what I can tell from scanning this thread,
pooling would not be used at all. I suspect the .NET driver does the
same thing.

FWIW it'd be nice to expose it as some sort of interface, so that other
connection pools can leverage it too. There are use cases that don't
work with a built-in connection pool (say, PAUSE/RESUME in pgbouncer
allows restarting the database) so projects like pgbouncer or odyssey
are unlikely to disappear anytime soon.

Agreed, and as for other projects, I see their ability to run the pool on
a separate host as a strength. I certainly don't see them going
anywhere soon. Either way, having a unified pooling API would be a useful
goal.

I also wonder if we could make it more permissive even in v1, without
implementing dump/restore of session state.

Consider for example patterns like this:

BEGIN;
SET LOCAL enable_nestloop = off;
...
COMMIT;

or

PREPARE x(int) AS SELECT ...;
EXECUTE x(1);
EXECUTE x(2);
...
EXECUTE x(100000);
DEALLOCATE x;

Again, pgjdbc does use server-side prepared statements, so I'm assuming this
would not work for clients using pgjdbc or .NET.

Additionally we have setSchema, which is really SET search_path - again
incompatible.

Regards,

Dave

#30Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#17)
Re: Built-in connection pooler

On Tue, Jul 16, 2019 at 2:04 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

I have committed a patch which emulates the epoll EPOLLET flag and so should
avoid the busy loop with poll().
I will be pleased if you can check it on a FreeBSD box.

I tried your v12 patch and it gets stuck in a busy loop during make
check. You can see it on Linux with ./configure ...
CFLAGS="-DWAIT_USE_POLL".

--
Thomas Munro
https://enterprisedb.com

#31Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#28)
Re: Built-in connection pooler

On 26.07.2019 23:24, Tomas Vondra wrote:

Hi Konstantin,

I've started reviewing this patch and experimenting with it, so let me
share some initial thoughts.

1) not handling session state (yet)

I understand handling session state would mean additional complexity, so
I'm OK with not having it in v1. That being said, I think this is the
primary issue with connection pooling on PostgreSQL - configuring and
running a separate pool is not free, of course, but when people complain
to us it's when they can't actually use a connection pool because of
this limitation.

So what are your plans regarding this feature? I think you mentioned
you already have the code in another product. Do you plan to submit it
in the pg13 cycle, or what's the plan? I'm willing to put some effort
into reviewing and testing that.

I completely agree with you. My original motivation for implementing a
built-in connection pooler
was to preserve session semantics (prepared statements, GUCs,
temporary tables) for pooled connections.
Almost all production systems have to use some kind of pooling. But when
using pgbouncer & Co. we lose the ability
to use prepared statements, which can cause up to a two-times performance
penalty (in simple OLTP queries).
So I have implemented such a version of the connection pooler in PgPro EE.
It requires many changes in the Postgres core, so I realized that there is
no chance to commit it to the community version
(taking into account that many of my other patches, like autoprepare and
libpq compression, have been postponed for a very long time, although
they are much smaller and less invasive).

Then Dimitri Fontaine proposed that I implement a much simpler version of
the pooler based on the traditional proxy approach.
This patch is the result of our conversation with Dimitri.
You are asking about my plans... I think it will be better to first
polish this version of the patch and commit it, and only
after that add more sophisticated features
like saving/restoring session state.

FWIW it'd be nice to expose it as some sort of interface, so that other
connection pools can leverage it too. There are use cases that don't
work with a built-in connection pool (say, PAUSE/RESUME in pgbouncer
allows restarting the database) so projects like pgbouncer or odyssey
are unlikely to disappear anytime soon.

Obviously a built-in connection pooler will never completely replace
external poolers like pgbouncer, which provide more flexibility, e.g.
they make it possible to run the pooler on a separate host or on the
client side.

I also wonder if we could make it more permissive even in v1, without
implementing dump/restore of session state.

Consider for example patterns like this:

 BEGIN;
 SET LOCAL enable_nestloop = off;
 ...
 COMMIT;

or

 PREPARE x(int) AS SELECT ...;
 EXECUTE x(1);
 EXECUTE x(2);
 ...
 EXECUTE x(100000);
 DEALLOCATE x;

or perhaps even

 CREATE FUNCTION f() AS $$ ... $$
 LANGUAGE sql
 SET enable_nestloop = off;

In all those cases (and I'm sure there are other similar examples) the
connection pool marks the session as 'tainted' and we never reset that.
So even when an application tries to play nice, it can't use pooling.

Would it be possible to maybe track this with more detail (number of
prepared statements, ignore SET LOCAL, ...)? That should allow us to do
pooling even without full support for restoring session state.

Sorry, I do not completely understand your idea (how to implement these
features without maintaining session state).
To implement prepared statements we need to store them in the session
context, or at least add some session-specific prefix to the prepared
statement name (a sketch of what I mean is below).
Temporary tables also require a per-session temporary tablespace. With
GUCs the situation is even more complicated - actually, most of the time
spent on my PgPro-EE pooler version went into fighting with GUCs (default
values, reloading configuration, memory allocation/deallocation, ...).
But the show stopper is temporary tables: if they are accessed through
local (non-shared) buffers, then you cannot reschedule the session to
some other backend.
This is why I now have a patch implementing global temporary tables
(a la Oracle), which have global metadata and are accessed through
shared buffers (which also allows them to be used
in parallel queries).
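
To illustrate what I mean above by a session-specific prefix for prepared
statement names, here is a minimal sketch (not from the patch; the naming
scheme and session_id are just assumptions):

#include <stdio.h>

/* rewrite the client-visible statement name into a per-session name
 * before storing it in the backend's prepared-statement hash table */
static void
qualify_statement_name(char *dst, size_t dstlen,
                       int session_id, const char *client_name)
{
    snprintf(dst, dstlen, "s%d_%s", session_id, client_name);
}

So "EXECUTE x" from two different pooled sessions would address two
different server-side statements, e.g. "s1_x" and "s2_x".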

2) configuration

I think we need to rethink how the pool is configured. The options
available at the moment are more a consequence of the implementation and
are rather cumbersome to use in some cases.

For example, we have session_pool_size, which is (essentially) the
number of backends kept in the pool. Which seems fine at first, because
it seems like you might say

   max_connections = 100
   session_pool_size = 50

to say the connection pool will only ever use 50 connections, leaving
the rest for "direct" connection. But that does not work at all, because
the number of backends the pool can open is

   session_pool_size * connection_proxies * databases * roles

which pretty much means there's no limit, because while we can specify
the number of proxies, the number of databases and roles is arbitrary.
And there's no way to restrict which dbs/roles can use the pool.

So you can happily do this

   max_connections = 100
   connection_proxies = 4
   session_pool_size = 10

   pgbench -c 24 -U user1 test1
   pgbench -c 24 -U user2 test2
   pgbench -c 24 -U user3 test3
   pgbench -c 24 -U user4 test4

at which point it's pretty much game over, because each proxy has 4
pools, each with ~6 backends, 96 backends in total. And because
non-tainted connections are never closed, no other users/dbs can use the
pool (will just wait indefinitely).

To allow practical configurations, I think we need to be able to define:

* which users/dbs can use the connection pool
* minimum/maximum pool size per user, per db and per user/db
* maximum number of backend connections

We need to be able to close connections when needed (when not assigned,
and we need the connection for someone else).

Plus those limits need to be global, not "per proxy" - it's just strange
that increasing connection_proxies bumps up the effective pool size.

I don't know what's the best way to specify this configuration - whether
to store it in a separate file, in some system catalog, or what.

Well, I agree with you that maintaining a separate connection pool for
each database/role pair may be confusing.
My assumption was that in many configurations applications access the
same database (or a few databases) with one (or a very small number of)
users.
If you have hundreds of databases or users (e.g. each user connecting to
the database under their OS name), then a connection pooler will not
work in any case, no matter how you configure it. This is also true for
pgbouncer and any other pooler.
Since a Postgres backend is able to work only with one database, you
will have to start at least as many backends as you have databases.
The situation with users is less clear - it may be possible to implement
multiuser access to the same backend (as can be done now using "set
role").

So I am not sure that implementing a sophisticated configurator, which
allows specifying in some configuration file a maximal/optimal number of
workers for each database/role pair, would completely eliminate the
problem with multiple session pools.

In particular, assume that we have 3 databases and want to serve them
with 10 workers.
Now we receive 10 requests to database A. We start 10 backends which
serve these queries.
Then we receive 10 requests to database B. What should we do then?
Terminate all these 10 backends and start 10 new ones instead? Or should
we start 3 workers for database A, 3 workers for database B and 4
workers for database C?
In that case, if most of the requests are for database A, we will not be
able to utilize all system resources.
Certainly we can specify in the configuration file that database A needs
6 workers and B/C two workers each.
But that will only work if we know the workload statically...

So I have thought a lot about it, but failed to find a good and flexible
solution.
It looks like if you want an efficient connection pooler, you should
restrict the number of databases and roles.

3) monitoring

I think we need much better monitoring capabilities. At this point we
have a single system catalog (well, a SRF) giving us proxy-level
summary. But I think we need much more detailed overview - probably
something like pgbouncer has - listing of client/backend sessions, with
various details.

Of course, that's difficult to do when those lists are stored in private
memory of each proxy process - I think we need to move this to shared
memory, which would also help to address some of the issues I mentioned
in the previous section (particularly that the limits need to be global,
not per proxy).

I also agree that more monitoring facilities are needed.
I just want to get a better understanding of what kind of information we
need to monitor.
Since pooling is done at the transaction level, all non-active sessions
are in the idle state,
and the state of active sessions can be inspected using pg_stat_activity.
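
For example, with what is available today one can combine the per-proxy
counters with the standard activity view (pg_pooler_state() is the SRF
added by this patch):

   SELECT * FROM pg_pooler_state();
   SELECT pid, usename, datname, state, query
     FROM pg_stat_activity
    WHERE state <> 'idle';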

4) restart_pooler_on_reload

I find it quite strange that restart_pooler_on_reload binds restart of
the connection pool to reload of the configuration file. That seems like
a rather surprising behavior, and I don't see why would you ever want
that? Currently it seems like the only way to force the proxies to close
the connections (the docs mention DROP DATABASE), but why shouldn't we
have separate functions to do that? In particular, why would you want to
close connections for all databases and not just for the one you're
trying to drop?

Configuration reload is already broadcast to all backends.
If some other approach for controlling the pool workers were used,
it would be necessary to implement a separate notification mechanism.
Certainly it is doable. But as I already wrote, the primary idea was to
minimize this patch and make it as non-invasive as possible.
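
As a sketch of the current workflow (assuming restart_pooler_on_reload
has already been set to true; the database name is just an example),
dropping a database served by pooled backends looks like this:

   SELECT pg_reload_conf();   -- pooled backends are shut down
   DROP DATABASE mydb;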

5) session_schedule

It's nice we support different strategies to assign connections to
worker processes, but how do you tune it? How do you pick the right
option for your workload? We either need to provide metrics to allow
informed decision, or just not provide the option.

The honest answer to this question is "I don't know".
I have just implemented a few different policies and assume that people
will test them on their workloads and
tell me which one is most efficient. Then it will be possible to give
some recommendations on how to choose a policy.

Also, the current criterion for "load-balancing" may be too dubious.
Maybe the formula should include some other metrics rather than just the
number of connected clients.
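
For reference, trying another policy is just a matter of changing the
GUC (the value names come from the patch; I have not double-checked
whether it can be changed without a server restart):

   session_schedule = 'load-balancing'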

And "load average" may be a bit misleading term (as used in the section
about load-balancing option). It kinda suggests we're measuring how busy
the different proxies were recently (that's what load average in Unix
does) - by counting active processes, CPU usage or whatever.  But AFAICS
that's not what's happening at all - it just counts the connections,
with SSL connections counted as more expensive.

Generally I agree. The current criterion for "load-balancing" may be too
dubious.
Maybe the formula should include some other metrics rather than just the
number of connected clients.
But I failed to find such metrics. CPU usage? The proxies themselves use
CPU only for redirecting traffic.
Assume that one proxy is serving 10 clients performing OLAP queries and
another one 100 clients performing OLTP queries.
Certainly OLTP queries usually execute much faster, but it is hard to
estimate the amount of transferred data for both proxies.
Generally OLTP queries access few records, while OLAP queries access
much more data. But OLAP queries usually perform some aggregation,
so the final result may also be small...

Looks like we need to measure not only the load of the proxy itself but
also the load of the backends attached to this proxy.
But that requires much more effort.
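
For what it's worth, the weighting the patch currently applies when
assigning sessions can be reproduced from the pooler state view (the
factor 3 for SSL clients matches GetConnectionProxyWorkload() in the
attached patch; the column names assume the SRF exposes these counters
directly):

   SELECT pid, n_clients + 3 * n_ssl_clients AS load_estimate
     FROM pg_pooler_state()
    ORDER BY load_estimate;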

6) issues during testing

While testing, I've seen a couple of issues. Firstly, after specifying a
db that does not exist:

 psql -h localhost -p 6543 xyz

just hangs and waits forever. In the server log I see this:

 2019-07-25 23:16:50.229 CEST [31296] FATAL:  database "xyz" does not
exist
 2019-07-25 23:16:50.258 CEST [31251] WARNING:  could not setup local
connect to server
 2019-07-25 23:16:50.258 CEST [31251] DETAIL:  FATAL:  database "xyz"
does not exist

But the client somehow does not get the message and waits.

Fixed.

Secondly, when trying this
 pgbench -p 5432 -U x -i -s 1 test
 pgbench -p 6543 -U x -c 24 -C -T 10 test

it very quickly locks up, with plenty of non-granted locks in pg_locks,
but I don't see any interventions by deadlock detector so I presume
the issue is somewhere else. I don't see any such issues when running
without the connection pool or without the -C option:

 pgbench -p 5432 -U x -c 24 -C -T 10 test
 pgbench -p 6543 -U x -c 24 -T 10 test

This is with default postgresql.conf, except for

 connection_proxies = 4

I need more time to investigate this problem.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#32Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#30)
1 attachment(s)
Re: Built-in connection pooler

On 27.07.2019 14:49, Thomas Munro wrote:

On Tue, Jul 16, 2019 at 2:04 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

I have committed a patch which emulates the epoll EPOLLET flag and so
should avoid a busy loop with poll().
I will be pleased if you can check it on a FreeBSD box.

I tried your v12 patch and it gets stuck in a busy loop during make
check. You can see it on Linux with ./configure ...
CFLAGS="-DWAIT_USE_POLL".

--
Thomas Munro
https://enterprisedb.com

A new version of the patch is attached, which fixes the poll() and Win32
implementations of WaitEventSet.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-13.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..2758506 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" are assigned dedicated backends,
+          while clients connected to the proxy port are served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..a4b2720
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,173 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms that operate on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and Odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres 5432, so that by default all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed by the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..5b19fef 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..acbaed3 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a..10a14d0 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad43..57d856f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for autovac workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..ce8c3a3
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1103 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client onto it.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					StringInfoData msgbuf;
+					initStringInfo(&msgbuf);
+					pq_sendbyte(&msgbuf, 'E');
+					pq_sendint32(&msgbuf, 7 + strlen(error));
+					pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+					pq_sendstring(&msgbuf, error);
+					pq_sendbyte(&msgbuf, '\0');
+					socket_write(chan, msgbuf.data, msgbuf.len);
+					pfree(msgbuf.data);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately, because it may still be referenced by other events in the current epoll batch.
+ * So link all such channels into a singly-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite mutual recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		* It cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because events for their peers may still be present in the current ready set.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from another auxiliary workers.
+ * In future it may be replaced with background worker.
+ * The main problem with background worker is how to pass socket to it and obtains its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning functions returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c7fc97d 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we can not move events, so we maintain a list of free events instead.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* L1-list of free events linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,6 +592,9 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
 	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -585,6 +609,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +657,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +674,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +715,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +746,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +786,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +831,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +874,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +914,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +924,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +935,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +973,20 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +999,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1016,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1287,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1315,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1412,24 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1495,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1536,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..62ec2af 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..79001cc 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..47b3845 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,42 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by ont connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2239,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2318,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
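Putting the new GUCs together, a postgresql.conf fragment using them could look like this (values are purely illustrative, not recommendations):

proxy_port = 6543                    # extra port served by the connection proxies
connection_proxies = 4               # number of proxy worker processes
session_pool_size = 10               # backends per database/role pair per proxy
max_sessions = 1000                  # client sessions per proxy
session_schedule = 'load-balancing'  # or 'round-robin' (default), 'random'
restart_pooler_on_reload = off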
@@ -4561,7 +4635,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4821,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4911,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5085,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5101,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5127,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5488,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5783,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5807,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6141,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6204,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6838,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6880,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6942,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +6989,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7564,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7770,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7827,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8108,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8166,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8145,6 +8219,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
+	MyProc->is_tainted = true;
 
 	switch (stmt->kind)
 	{
@@ -8175,7 +8250,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8272,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8290,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8444,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8453,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8535,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8563,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8641,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9354,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9508,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9736,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9971,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10027,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10043,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10399,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10458,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10538,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10679,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10702,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10761,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11197,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11210,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11248,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11278,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11566,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..5f528c1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 96415a9..6d1a926 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..86c0ef8 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..8e2079b 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, created temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c0b8e3f..24569d8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 973691c..bcbfec3 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#33Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#31)
Re: Built-in connection pooler

On Mon, Jul 29, 2019 at 07:14:27PM +0300, Konstantin Knizhnik wrote:

On 26.07.2019 23:24, Tomas Vondra wrote:

Hi Konstantin,

I've started reviewing this patch and experimenting with it, so let me
share some initial thoughts.

1) not handling session state (yet)

I understand handling session state would mean additional complexity, so
I'm OK with not having it in v1. That being said, I think this is the
primary issue with connection pooling on PostgreSQL - configuring and
running a separate pool is not free, of course, but when people complain
to us it's when they can't actually use a connection pool because of
this limitation.

So what are your plans regarding this feature? I think you mentioned
you already have the code in another product. Do you plan to submit it
in the pg13 cycle, or what's the plan? I'm willing to put some effort
into reviewing and testing that.

I completely agree with you. My original motivation for implementing
a built-in connection pooler
was to be able to preserve session semantics (prepared statements,
GUCs, temporary tables) for pooled connections.
Almost all production systems have to use some kind of pooling. But in
case of using pgbouncer & Co we are losing the possibility
to use prepared statements, which can cause up to a two-times performance
penalty (for simple OLTP queries).
So I have implemented such a version of the connection pooler in PgPro EE.
It requires many changes in the Postgres core, so I realized that there was
no chance to commit it to the community version
(taking into account that many of my other patches, like autoprepare and libpq
compression, have been postponed for a very long time, although
they are much smaller and less invasive).

Then Dimitri Fontaine proposed that I implement a much simpler version of
the pooler based on the traditional proxy approach.
This patch is the result of our conversation with Dimitri.
You are asking me about my plans... I think that it will be better
first to polish this version of the patch and commit it, and only
after that add more sophisticated features
like saving/restoring session state.

Well, I understand the history of this patch, and I have no problem with
getting a v1 of a connection pool without this feature. After all,
that's the idea of incremental development. But that only works when v1
allows adding that feature in v2, and I can't quite judge that. Which
is why I've asked you about your plans, because you clearly have more
insight thanks to writing the pooler for PgPro EE.

FWIW it'd be nice to expose it as some sort of interface, so that other
connection pools can leverage it too. There are use cases that don't
work with a built-in connection pool (say, PAUSE/RESUME in pgbouncer
allows restarting the database) so projects like pgbouncer or odyssey
are unlikely to disappear anytime soon.

Obviously a built-in connection pooler will never completely replace
external poolers like pgbouncer, which provide more flexibility, e.g.
make it possible to install the pooler on a separate host or on the client side.

Sure. But that wasn't really my point - I was suggesting to expose this
hypothetical feature (managing session state) as some sort of API usable
from other connection pools.

I also wonder if we could make it more permissive even in v1, without
implementing dump/restore of session state.

Consider for example patterns like this:

 BEGIN;
 SET LOCAL enable_nestloop = off;
 ...
 COMMIT;

or

 PREPARE x(int) AS SELECT ...;
 EXECUTE x(1);
 EXECUTE x(2);
 ...
 EXECUTE x(100000);
 DEALLOCATE x;

or perhaps even

 CREATE FUNCTION f() AS $$ ... $$
 LANGUAGE sql
 SET enable_nestloop = off;

In all those cases (and I'm sure there are other similar examples) the
connection pool considers the session 'tainted': it marks it as tainted
and we never reset that. So even when an application tries to play nice,
it can't use pooling.

Would it be possible to maybe track this with more detail (number of
prepared statements, ignore SET LOCAL, ...)? That should allow us to do
pooling even without full support for restoring session state.

Sorry, I do not completely understand your idea (how to implement these
features without maintaining session state).

My idea (sorry if it wasn't too clear) was that we might handle some
cases more gracefully.

For example, if we only switch between transactions, we don't quite care
about 'SET LOCAL' (but the current patch does set the tainted flag). The
same thing applies to GUCs set for a function.

For prepared statements, we might count the number of statements we
prepared and deallocated, and treat it as 'not tainted' when there are no
statements. Maybe there's some risk I can't think of.

The same thing applies to temporary tables - if you create and drop a
temporary table, is there a reason to still treat the session as tainted?
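
Just to illustrate what I mean, here's a purely hypothetical sketch (none of
these fields or functions exist in the patch, and I may well be missing some
interaction that makes it harder than it looks):

  /* hypothetical per-backend counters instead of a single is_tainted flag */
  typedef struct SessionTaintState
  {
      int         n_prepared_stmts;       /* live PREPARE'd statements */
      int         n_session_temp_tables;  /* temp tables without ON COMMIT DROP */
      bool        session_guc_changed;    /* plain SET seen (SET LOCAL ignored) */
  } SessionTaintState;

  static bool
  session_is_poolable(SessionTaintState *s)
  {
      /*
       * SET LOCAL and proconfig settings are popped automatically at the end
       * of the transaction/function, so they would not count here.
       */
      return !s->session_guc_changed &&
             s->n_prepared_stmts == 0 &&
             s->n_session_temp_tables == 0;
  }

That way a backend could become poolable again once the application
deallocates its statements and drops its temp tables, instead of staying
dedicated forever.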

To implement prepared statements we need to store them in the session
context or at least add some session-specific prefix to prepared
statement names.
Temporary tables also require a per-session temporary table space. With
GUCs the situation is even more complicated - actually most of the time on
my PgPro-EE pooler version
I have spent fighting with GUCs (default values, reloading
configuration, memory allocation/deallocation, ...).
But the "show stopper" is temporary tables: if they are accessed
through internal (non-shared) buffers, then you cannot reschedule a
session to some other backend.
This is why I now have a patch implementing global temporary
tables (a la Oracle) which have global metadata and are accessed through
shared buffers (which also allows them to be used
in parallel queries).

Yeah, temporary tables are messy. Global temporary tables would be nice,
not just because of this, but also because of catalog bloat.

2) configuration

I think we need to rethink how the pool is configured. The options
available at the moment are more a consequence of the implementation and
are rather cumbersome to use in some cases.

For example, we have session_pool_size, which is (essentially) the
number of backends kept in the pool. Which seems fine at first, because
it seems like you might say

   max_connections = 100
   session_pool_size = 50

to say the connection pool will only ever use 50 connections, leaving
the rest for "direct" connection. But that does not work at all, because
the number of backends the pool can open is

   session_pool_size * connection_proxies * databases * roles

which pretty much means there's no limit, because while we can specify
the number of proxies, the number of databases and roles is arbitrary.
And there's no way to restrict which dbs/roles can use the pool.

So you can happily do this

   max_connections = 100
   connection_proxies = 4
   session_pool_size = 10

   pgbench -c 24 -U user1 test1
   pgbench -c 24 -U user2 test2
   pgbench -c 24 -U user3 test3
   pgbench -c 24 -U user4 test4

at which point it's pretty much game over, because each proxy has 4
pools, each with ~6 backends, 96 backends in total. And because
non-tainted connections are never closed, no other users/dbs can use the
pool (will just wait indefinitely).

To allow practical configurations, I think we need to be able to define:

* which users/dbs can use the connection pool
* minimum/maximum pool size per user, per db and per user/db
* maximum number of backend connections

We need to be able to close connections when needed (when not assigned,
and we need the connection for someone else).

Plus those limits need to be global, not "per proxy" - it's just strange
that increasing connection_proxies bumps up the effective pool size.

I don't know what's the best way to specify this configuration - whether
to store it in a separate file, in some system catalog, or what.
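
Purely as an illustration of the granularity I have in mind (made-up syntax,
nothing like this exists in the patch or in pg_hba.conf today), something
along these lines would be enough for me:

  # database   role     pool_size   min_pool_size   max_dedicated
  test1        user1    10          2               5
  test2        all      5           0               0
  all          all      deny

i.e. per database/role limits plus a default rule, so that a couple of
pgbench runs against other databases can't exhaust the whole pool.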

Well, I agree with you that maintaining a separate connection pool for
each database/role pair may be confusing.

Anything can be confusing ...

My assumption was that in many configurations applications are
accessing the same (or a few) databases with one (or a very small number)
of users.
If you have hundreds of databases or users (each connecting to the
database under its own OS name), then
the connection pooler will not work in any case, no matter how you
configure it. The same is true for pgbouncer and any other pooler.

Sure, but I don't expect connection pool to work in such cases.

But I do expect to be able to configure which users can use the
connection pool at all, and maybe assign them different pool sizes.

If a Postgres backend is able to work with only one database, then you
will have to start at least as many backends as the number of
databases you have.
The situation with users is more obscure - it may be possible to implement
multiuser access to the same backend (as can be done now using "set
role").

I don't think I've said we need anything like that. The way I'd expect
it to work that when we run out of backend connections, we terminate
some existing ones (and then fork new backends).

So I am not sure that implementing a sophisticated configurator, which
allows specifying in some configuration file for each database/role
pair the maximal/optimal number
of workers, would completely eliminate the problem with multiple
session pools.

Why would we need to invent any sophisticated configurator? Why couldn't
we use some version of what pgbouncer already does, or maybe integrate
it somehow into pg_hba.conf?

Particularly, assume that we have 3 databases and want to serve them
with 10 workers.
Now we receive 10 requests to database A. We start 10 backends which
serve these queries.
Then we receive 10 requests to database B. What should we do then?
Terminate all these 10 backends and start 10 new ones
instead? Or should we start 3 workers for database A, 3
workers for database B and 4 workers for database C?
In this case, if most of the requests are to database A, we will not be
able to utilize all system resources.
Certainly we can specify in the configuration file that database A needs 6
workers and B/C two workers each.
But it will work only if we know the workload statically...

My concern is not as much performance as inability to access the
database at all. There's no reasonable way to "guarantee" some number of
connections to a given database. Which is what pgbouncer does (through
min_pool_size).

Yes, it requires knowledge of the workload, and I don't think that's an
issue.

So I have thought a lot about it, but failed to find a good and
flexible solution.
It looks like if you want an efficient connection pooler, you should
restrict the number of
databases and roles.

I agree we should not over-complicate this, but I still find the current
configuration insufficient.

3) monitoring

I think we need much better monitoring capabilities. At this point we
have a single system catalog (well, a SRF) giving us proxy-level
summary. But I think we need much more detailed overview - probably
something like pgbouncer has - listing of client/backend sessions, with
various details.

Of course, that's difficult to do when those lists are stored in private
memory of each proxy process - I think we need to move this to shared
memory, which would also help to address some of the issues I mentioned
in the previous section (particularly that the limits need to be global,
not per proxy).

I also agree that more monitoring facilities are needed.
I just want to get a better understanding of what kind of information we need
to monitor.
As far as pooling is done at the transaction level, all non-active sessions
are in the idle state
and the state of active sessions can be inspected using pg_stat_activity.

Except when sessions are tainted, for example. And when the transactions
are long-running, it's still useful to list the connections.

I'd suggest looking at the stats available in pgbouncer, most of that
actually comes from practice (monitoring metrics, etc.)

4) restart_pooler_on_reload

I find it quite strange that restart_pooler_on_reload binds restart of
the connection pool to reload of the configuration file. That seems like
a rather surprising behavior, and I don't see why would you ever want
that? Currently it seems like the only way to force the proxies to close
the connections (the docs mention DROP DATABASE), but why shouldn't we
have separate functions to do that? In particular, why would you want to
close connections for all databases and not just for the one you're
trying to drop?

Configuration reload is already broadcast to all backends.
In case of using some other approach for controlling pool workers,
it will be necessary to implement our own notification mechanism.
Certainly it is doable. But as I already wrote, the primary idea was
to minimize
this patch and make it as non-invasive as possible.

OK

5) session_schedule

It's nice we support different strategies to assign connections to
worker processes, but how do you tune it? How do you pick the right
option for your workload? We either need to provide metrics to allow
informed decision, or just not provide the option.

The honest answer to this question is "I don't know".
I have just implemented a few different policies and assume that people
will test them on their workloads and
tell me which one is most efficient. Then it will be possible to
give some recommendations on how to
choose policies.

Also, the current criteria for "load-balancing" may be too dubious.
Maybe the formula should include some other metrics rather than just
the number of connected clients.

OK

And "load average" may be a bit misleading term (as used in the section
about load-balancing option). It kinda suggests we're measuring how busy
the different proxies were recently (that's what load average in Unix
does) - by counting active processes, CPU usage or whatever.� But AFAICS
that's not what's happening at all - it just counts the connections,
with SSL connections counted as more expensive.
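
If I read the code correctly, the current metric boils down to roughly this
(simplified; the actual SSL weight in the patch may differ):

  /* per-proxy "load" as used by the load-balancing policy, roughly */
  static int
  proxy_load(ConnectionProxyState *ps)
  {
      return ps->n_clients + 2 * ps->n_ssl_clients;    /* hypothetical weight */
  }

which is really a (weighted) connection count, not a load average in the
Unix sense.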

Generally I agree. The current criteria for "load-balancing" may be too dubious.
Maybe the formula should include some other metrics rather than just
the number of connected clients.
But I failed to find such metrics. CPU usage? But the proxies themselves
use CPU only for redirecting traffic.
Assume that one proxy is serving 10 clients performing OLAP queries
and another one 100 clients performing OLTP queries.
Certainly OLTP queries usually execute much faster. But it is
hard to estimate the amount of transferred data for both proxies.
Generally OLTP queries access few records, while OLAP queries
access much more data. But OLAP queries usually perform some
aggregation,
so the final result may also be small...

It looks like we need to measure not only the load of the proxy itself but also
the load created by the clients connected to this proxy.
But it requires much more effort.

I think "smart" load-balancing is fairly difficult to get right. I'd
just cut it from initial patch, keeping just the simple strategies
(random, round-robin).

6) issues during testing

While testing, I've seen a couple of issues. Firstly, after specifying a
db that does not exist:

 psql -h localhost -p 6543 xyz

just hangs and waits forever. In the server log I see this:

 2019-07-25 23:16:50.229 CEST [31296] FATAL:  database "xyz" does not exist
 2019-07-25 23:16:50.258 CEST [31251] WARNING:  could not setup local connect to server
 2019-07-25 23:16:50.258 CEST [31251] DETAIL:  FATAL:  database "xyz" does not exist

But the client somehow does not get the message and waits.

Fixed.

Secondly, when trying this
 pgbench -p 5432 -U x -i -s 1 test
 pgbench -p 6543 -U x -c 24 -C -T 10 test

it very quickly locks up, with plenty of non-granted locks in pg_locks,
but I don't see any interventions by deadlock detector so I presume
the issue is somewhere else. I don't see any such issues when running
without the connection pool or without the -C option:

 pgbench -p 5432 -U x -c 24 -C -T 10 test
 pgbench -p 6543 -U x -c 24 -T 10 test

This is with default postgresql.conf, except for

 connection_proxies = 4

I need more time to investigate this problem.

OK

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#34Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#33)
1 attachment(s)
Re: Built-in connection pooler

On 30.07.2019 4:02, Tomas Vondra wrote:

My idea (sorry if it wasn't too clear) was that we might handle some
cases more gracefully.

For example, if we only switch between transactions, we don't quite care
about 'SET LOCAL' (but the current patch does set the tainted flag). The
same thing applies to GUCs set for a function.
For prepared statements, we might count the number of statements we
prepared and deallocated, and treat it as 'not tainted' when there are no
statements. Maybe there's some risk I can't think of.

The same thing applies to temporary tables - if you create and drop a
temporary table, is there a reason to still treat the session as tainted?

I am already handling temporary tables with transaction scope (created
using the "create temp table ... on commit drop" command) - the backend is not
marked as tainted in this case.
Thank you for your notice about the "set local" command - the attached patch
also handles such GUCs.
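
For example, with the attached patch a pattern like this should keep the
backend poolable, because all the session state is transaction-scoped
(a sketch, assuming default settings):

  BEGIN;
  SET LOCAL work_mem = '64MB';                        -- reverted at end of transaction
  CREATE TEMP TABLE tmp_ids(id int) ON COMMIT DROP;   -- dropped at commit
  INSERT INTO tmp_ids SELECT generate_series(1, 1000);
  -- ... use tmp_ids ...
  COMMIT;

while a plain SET, PREPARE or a session-lifetime temporary table still marks
the backend as tainted.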

To implement prepared statements we need to store them in the session
context or at least add some session-specific prefix to prepared
statement names.
Temporary tables also require a per-session temporary table space. With
GUCs the situation is even more complicated - actually most of the time
on my PgPro-EE pooler version
I have spent fighting with GUCs (default values, reloading
configuration, memory allocation/deallocation, ...).
But the "show stopper" is temporary tables: if they are accessed
through internal (non-shared) buffers, then you cannot reschedule a
session to some other backend.
This is why I now have a patch implementing global temporary
tables (a la Oracle) which have global metadata and are accessed
through shared buffers (which also allows them to be used
in parallel queries).

Yeah, temporary tables are messy. Global temporary tables would be nice,
not just because of this, but also because of catalog bloat.

Global temp tables solve two problems:
1. catalog bloating
2. parallel query execution.

They do not solve the problem of using temporary tables on a replica.
Maybe this problem can be solved by implementing a special table access
method for temporary tables.
But I am still not sure how useful such an implementation of a special
table access method for temporary tables would be.
Obviously it requires much more effort (a lot of
heapam stuff needs to be reimplemented).
But it would allow eliminating MVCC overhead for temporary tuples and may
also reduce space by reducing the size of the tuple header.

If a Postgres backend is able to work with only one database, then you
will have to start at least as many backends as the number of
databases you have.
The situation with users is more obscure - it may be possible to
implement multiuser access to the same backend (as can be done now
using "set role").

I don't think I've said we need anything like that. The way I'd expect
it to work that when we run out of backend connections, we terminate
some existing ones (and then fork new backends).

I am afraid that it may eliminate most of the positive effect of session
pooling if we terminate and launch new backends without any
attempt to bind backends to databases and reuse them.

So I am not sure that implementing a sophisticated configurator,
which allows specifying in some configuration file for each
database/role pair the maximal/optimal number
of workers, would completely eliminate the problem with multiple
session pools.

Why would we need to invent any sophisticated configurator? Why couldn't
we use some version of what pgbouncer already does, or maybe integrate
it somehow into pg_hba.conf?

I didn't think about such a possibility.
But I suspect there would be many problems with reusing pgbouncer code and
moving it to the Postgres core.

I also agree that more monitoring facilities are needed.

I just want to get a better understanding of what kind of information we
need to monitor.
As far as pooling is done at the transaction level, all non-active sessions
are in the idle state
and the state of active sessions can be inspected using pg_stat_activity.

Except when sessions are tainted, for example. And when the transactions
are long-running, it's still useful to list the connections.

Tainted backends are very similar to normal Postgres backends.
The only difference is that they are still connected to the client through
the proxy.
What I wanted to say is that pg_stat_activity will show you information
about all active transactions
even in the case of connection pooling. You will not get information about
pending sessions waiting for
idle backends. But such sessions do not have any state (the transaction is
not started yet). So there is not much useful information
we can show about them except just the number of such pending sessions.
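
For example, a rough sketch of what can already be monitored (exact column
list aside):

  -- active transactions, whether the backend is pooled or dedicated
  SELECT pid, usename, datname, state, query
    FROM pg_stat_activity
   WHERE backend_type = 'client backend' AND state <> 'idle';

  -- per-proxy counters, including the number of tainted (dedicated) backends
  SELECT pid, n_clients, n_backends, n_dedicated_backends, n_transactions
    FROM pg_pooler_state();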

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-14.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..2758506 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,123 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..a4b2720
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,173 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures, as well as the complexity of the
+    algorithms working on them, are proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE pools, or odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients with backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client is connected to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between them.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster will choose the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling mode other than session-level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..5b19fef 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..acbaed3 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a..10a14d0 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad43..049a76d 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..ce8c3a3
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1103 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * Now, if the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+			chan->peer->peer = NULL;
+		chan->pool->n_idle_clients += 1;
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					StringInfoData msgbuf;
+					initStringInfo(&msgbuf);
+					pq_sendbyte(&msgbuf, 'E');
+					pq_sendint32(&msgbuf, 7 + strlen(error));
+					pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+					pq_sendstring(&msgbuf, error);
+					pq_sendbyte(&msgbuf, '\0');
+					socket_write(chan, msgbuf.data, msgbuf.len);
+					pfree(msgbuf.data);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove channel immediately because it can be triggered by other epoll events.
+ * So link all channels in L1 list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+	} else {
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next) {
+			if (*ipp == chan) {
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it for him. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * The data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of an error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* do not recurse back into channel_read when called from it */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle or non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is an error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later, when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later, once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Consider increasing the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is forked using the BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach the BackendKeyData ('K') message containing the PID */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; an error report was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; an error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		n_ready = WaitEventSetWait(proxy->wait_events, PROXY_WAIT_TIMEOUT, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately, because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from client to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[9];
+	bool  nulls[9];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[7] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[8] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i <= 8; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
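
To summarize the framing logic that channel_read() implements above: a startup packet has no type byte and its 4-byte length counts itself, while every later message is a one-byte type followed by a length that excludes the type byte. The following standalone C sketch (not part of the patch; the helper name is illustrative only) shows the same calculation in isolation:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/*
 * Illustrative helper (hypothetical, not part of the patch): total on-wire
 * size of the next frontend/backend protocol message starting at "buf".
 */
static size_t
message_total_size(const char *buf, bool is_startup_packet)
{
	uint32_t len;

	if (is_startup_packet)
	{
		/* Startup packet: no type byte; the 4-byte length counts itself. */
		memcpy(&len, buf, sizeof(len));
		return ntohl(len);
	}
	/* Regular message: 1-byte type, then a length that excludes the type byte. */
	memcpy(&len, buf + 1, sizeof(len));
	return (size_t) ntohl(len) + 1;
}

The proxy keeps reading until at least this many bytes have been accumulated, then forwards the complete message to the peer channel.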
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c36e9a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from a wait event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from an event to the descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +997,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1014,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I had a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset right after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..62ec2af 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..79001cc 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,14 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +153,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..60d4d8c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,42 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero, session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximum number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and distributes connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximum number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximum number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and can actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2239,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2318,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4614,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session scheduling policy for the connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4635,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4821,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4911,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5085,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5101,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5127,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5488,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5783,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5807,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6141,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6204,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6838,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6880,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6942,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +6989,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7564,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7770,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7827,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8108,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8166,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8146,6 +8220,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
@@ -8175,7 +8252,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8274,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8292,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8446,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8455,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8537,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8565,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8643,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9356,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9510,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9738,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9973,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10029,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10045,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10401,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10460,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10540,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10681,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10704,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10763,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11199,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11212,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11250,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11280,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11568,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..5f528c1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 96415a9..6d1a926 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..86c0ef8 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,19 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..8e2079b 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..7f7a92a
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in share dmemory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* nubmer of dbname/role combinations */
+	int n_backends;           /* totatal number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data send to server */
+	uint64 n_transactions;    /* total number of proroceeded transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c0b8e3f..24569d8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 973691c..bcbfec3 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#35Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#28)
Re: Built-in connection pooler

On 26.07.2019 23:24, Tomas Vondra wrote:

Secondly, when trying this
 pgbench -p 5432 -U x -i -s 1 test
 pgbench -p 6543 -U x -c 24 -C -T 10 test

it very quickly locks up, with plenty of non-granted locks in pg_locks,
but I don't see any interventions by deadlock detector so I presume
the issue is somewhere else. I don't see any such issues whe running
without the connection pool or without the -C option:

 pgbench -p 5432 -U x -c 24 -C -T 10 test
 pgbench -p 6543 -U x -c 24 -T 10 test

This is with default postgresql.conf, except for

 connection_proxies = 4

After some investigation I tend to think that it is a problem of pgbench.
It synchronously establishes a new connection:

#0  0x00007f022edb7730 in __poll_nocancel () at
../sysdeps/unix/syscall-template.S:84
#1  0x00007f022f7ceb77 in pqSocketPoll (sock=4, forRead=1, forWrite=0,
end_time=-1) at fe-misc.c:1164
#2  0x00007f022f7cea32 in pqSocketCheck (conn=0x1273bf0, forRead=1,
forWrite=0, end_time=-1) at fe-misc.c:1106
#3  0x00007f022f7ce8f2 in pqWaitTimed (forRead=1, forWrite=0,
conn=0x1273bf0, finish_time=-1) at fe-misc.c:1038
#4  0x00007f022f7c0cdb in connectDBComplete (conn=0x1273bf0) at
fe-connect.c:2029
#5  0x00007f022f7be71f in PQconnectdbParams (keywords=0x7ffc1add5640,
values=0x7ffc1add5680, expand_dbname=1) at fe-connect.c:619
#6  0x0000000000403a4e in doConnect () at pgbench.c:1185
#7  0x0000000000407715 in advanceConnectionState (thread=0x1268570,
st=0x1261778, agg=0x7ffc1add5880) at pgbench.c:2919
#8  0x000000000040f1b1 in threadRun (arg=0x1268570) at pgbench.c:6121
#9  0x000000000040e59d in main (argc=10, argv=0x7ffc1add5f98) at
pgbench.c:5848

I.e. it starts a normal transaction in one connection (a few
select/update/insert statements which are part of the standard pgbench
transaction)
and at the same time tries to establish a new connection.
Since the built-in connection pooler performs transaction level
scheduling, the first session holds the backend until the end of its transaction.
So until this transaction is
completed the backend will not be able to process some other transaction or
accept a new connection. But pgbench is not completing this transaction
because it is blocked
waiting for the response to the new connection.

The problem can be easily reproduced with just two connections if
connection_proxies=1 and session_pool_size=1:

 pgbench -p 6543 -n -c 2 -C -T 10 postgres
<hanged>

knizhnik@knizhnik:~/postgrespro.ee11$ ps aux | fgrep postgres
knizhnik 14425  0.0  0.1 172220 17540 ?        Ss   09:48   0:00
/home/knizhnik/postgresql.builtin_pool/dist/bin/postgres -D pgsql.proxy
knizhnik 14427  0.0  0.0 183440  5052 ?        Ss   09:48   0:00
postgres: connection proxy
knizhnik 14428  0.0  0.0 172328  4580 ?        Ss   09:48   0:00
postgres: checkpointer
knizhnik 14429  0.0  0.0 172220  4892 ?        Ss   09:48   0:00
postgres: background writer
knizhnik 14430  0.0  0.0 172220  7692 ?        Ss   09:48   0:00
postgres: walwriter
knizhnik 14431  0.0  0.0 172760  5640 ?        Ss   09:48   0:00
postgres: autovacuum launcher
knizhnik 14432  0.0  0.0  26772  2292 ?        Ss   09:48   0:00
postgres: stats collector
knizhnik 14433  0.0  0.0 172764  5884 ?        Ss   09:48   0:00
postgres: logical replication launcher
knizhnik 14434  0.0  0.0  22740  3084 pts/21   S+   09:48   0:00 pgbench
-p 6543 -n -c 2 -C -T 10 postgres
knizhnik 14435  0.0  0.0 173828 13400 ?        Ss   09:48   0:00
postgres: knizhnik postgres [local] idle in transaction
knizhnik 21927  0.0  0.0  11280   936 pts/19   S+   11:58   0:00 grep -F
--color=auto postgres

But if you run each connection in a separate thread, then this test
completes normally:

nizhnik@knizhnik:~/postgresql.builtin_pool$ pgbench -p 6543 -n -j 2 -c 2
-C -T 10 postgres
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 2
number of threads: 2
duration: 10 s
number of transactions actually processed: 9036
latency average = 2.214 ms
tps = 903.466234 (including connections establishing)
tps = 1809.317395 (excluding connections establishing)

--

Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#36Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#34)
Re: Built-in connection pooler

On Tue, Jul 30, 2019 at 01:01:48PM +0300, Konstantin Knizhnik wrote:

On 30.07.2019 4:02, Tomas Vondra wrote:

My idea (sorry if it wasn't too clear) was that we might handle some
cases more gracefully.

For example, if we only switch between transactions, we don't quite care
about 'SET LOCAL' (but the current patch does set the tainted flag). The
same thing applies to GUCs set for a function.
For prepared statements, we might count the number of statements we
prepared and deallocated, and treat it as 'not tained' when there are no
statements. Maybe there's some risk I can't think of.

The same thing applies to temporary tables - if you create and drop a
temporary table, is there a reason to still treat the session as tainted?

I am already handling temporary tables with transaction scope (created
using the "create temp table ... on commit drop" command) - the backend is
not marked as tainted in this case.
Thank you for your notice about the "set local" command - the attached patch
also handles such GUCs.
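
For illustration only, a minimal sketch of the kind of check this implies,
building on the MyProc->is_tainted flag the patch introduces; the helper
name and its exact call site in guc.c are assumptions, not the attached patch:

#include "postgres.h"
#include "nodes/parsenodes.h"	/* VariableSetStmt */
#include "storage/proc.h"		/* MyProc */

/*
 * Illustrative sketch: mark the backend as tainted when a client changes a
 * GUC, but skip SET LOCAL, whose effect ends with the current transaction
 * and therefore cannot leak into another pooled session.
 */
static void
pool_taint_on_set(VariableSetStmt *stmt)
{
	if (stmt->is_local)
		return;					/* SET LOCAL is forgotten at transaction end */

	MyProc->is_tainted = true;	/* a plain SET survives the transaction */
}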

Thanks.

To implement prepared statements we need to store them in session
context or at least add some session specific prefix to prepared
statement names.
Temporary tables also require per-session temporary table space.
With GUCs the situation is even more complicated - actually most of
the time in my PgPro-EE pooler version
I have spent fighting with GUCs (default values, reloading
configuration, memory allocation/deallocation, ...).
But the "show stopper" are temporary tables: if they are accessed
through internal (non-shared) buffers, then you can not reschedule the
session to some other backend.
This is why I have now patch with implementation of global
temporary tables (a-la Oracle) which has global metadata and are
accessed though shared buffers (which also allows to use them
in parallel queries).

Yeah, temporary tables are messy. Global temporary tables would be nice,
not just because of this, but also because of catalog bloat.

Global temp tables solves two problems:
1. catalog bloating
2. parallel query execution.

They do not solve the problem of using temporary tables at a replica.
Maybe this problem can be solved by implementing a special table access
method for temporary tables.
But I am still not sure how useful such an implementation of a
special table access method for temporary tables would be.
Obviously it requires much more effort (a lot of heapam stuff needs to
be reimplemented).
But it would allow eliminating MVCC overhead for temporary tuples and
may be also reduce space by reducing the size of the tuple header.

Sure. Temporary tables are a hard issue (another place where they cause
trouble are 2PC transactions, IIRC), so I think it's perfectly sensible to
accept the limitation, handle cases that we can handle and see if we can
improve the remaining cases later.

If a Postgres backend is able to work only with one database, then
you will have to start at least as many backends as the number
of databases you have.
The situation with users is more obscure - it may be possible to
implement multiuser access to the same backend (as can be done
now using "set role").

Yes, that's a direct consequence of the PostgreSQL process model.

I don't think I've said we need anything like that. The way I'd expect
it to work that when we run out of backend connections, we terminate
some existing ones (and then fork new backends).

I am afraid that it may eliminate most of the positive effect of session
pooling if we terminate and launch new backends without any
attempt to bind backends to a database and reuse them.

I'm not sure I understand. Surely we'd still reuse connections as much as
possible - we'd still keep the per-db/user connection pools, but after
running out of backend connections we'd pick a victim in one of the pools,
close it and open a new connection.

We'd need some logic for picking the 'victim' but that does not seem
particularly hard - idle connections first, then connections from
"oversized" pools (this is one of the reasons why pgbouncer has
min_connection_pool).
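
For illustration, a rough sketch of such a victim-selection pass (not part
of the patch; the Pool/PoolBackend structures and fields below are
hypothetical, chosen only to show the "idle first, oversized pools
preferred" ordering):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical structures, not the ones used by the patch. */
typedef struct Pool
{
	int		n_backends;		/* backends currently launched for this pool */
	int		target_size;	/* configured size for this dbname/role pool */
} Pool;

typedef struct PoolBackend
{
	bool	idle;			/* not currently bound to a client transaction */
	Pool   *pool;			/* owning dbname/role pool */
} PoolBackend;

/*
 * Pick a backend to terminate when the server-wide backend budget is
 * exhausted: prefer an idle backend from an "oversized" pool, otherwise
 * fall back to any idle backend, otherwise give up (return NULL).
 */
static PoolBackend *
choose_victim(PoolBackend *backends, int n)
{
	PoolBackend *any_idle = NULL;

	for (int i = 0; i < n; i++)
	{
		if (!backends[i].idle)
			continue;	/* never kill a backend in the middle of a transaction */
		if (backends[i].pool->n_backends > backends[i].pool->target_size)
			return &backends[i];	/* idle and from an oversized pool */
		if (any_idle == NULL)
			any_idle = &backends[i];
	}
	return any_idle;
}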

So I am not sure that if we implement a sophisticated configurator
which allows specifying in some configuration file, for each
database/role pair, the maximal/optimal number
of workers, it will completely eliminate the problem with multiple
session pools.

Why would we need to invent any sophisticated configurator? Why couldn't
we use some version of what pgbouncer already does, or maybe integrate
it somehow into pg_hba.conf?

I didn't think about such a possibility.
But I suspect there would be many problems with reusing pgbouncer code and
moving it to Postgres core.

To be clear - I wasn't suggesting to copy any code from pgbouncer. It's
far too different (style, ...) compared to core. I'm suggesting to adopt
roughly the same configuration approach, i.e. what parameters are allowed
for each pool, global limits, etc.

I don't know whether we should have a separate configuration file, make it
part of pg_hba.conf somehow, or store the configuration in a system
catalog. But I'm pretty sure we don't need a "sophisticated configurator".

I also agree that more monitoring facilities are needed.

I just want to get a better understanding of what kind of information we
need to monitor.
Since the pooler works at transaction level, all non-active
sessions are in idle state
and the state of active sessions can be inspected using pg_stat_activity.

Except when sessions are tainted, for example. And when the transactions
are long-running, it's still useful to list the connections.

Tainted backends are very similar to normal postgres backends.
The only difference is that they are still connected with the client
through the proxy.
What I wanted to say is that pg_stat_activity will show you
information about all active transactions
even in case of connection pooling. You will not get information about
pending sessions waiting for
idle backends. But such sessions do not have any state (the transaction is
not started yet). So there is not much useful information
we can show about them except just the number of such pending sessions.

I suggest you take a look at metrics used for pgbouncer monitoring. Even
when you have pending connection, you can still get useful data about that
(e.g. average wait time to get a backend, maximum wait time, ...).
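
As a sketch of what such counters could look like on top of the patch's
per-proxy statistics (the ProxyWaitStats struct, its field names and the
instrumentation point below are assumptions, not something the patch
implements):

#include "postgres.h"
#include "utils/timestamp.h"

/* Hypothetical wait-time counters that could be maintained per proxy. */
typedef struct ProxyWaitStats
{
	uint64		n_waits;			/* sessions that had to wait for a backend */
	uint64		total_wait_usec;	/* accumulated wait time */
	uint64		max_wait_usec;		/* worst single wait */
} ProxyWaitStats;

/* Called when a pending session is finally attached to a backend (sketch only). */
static void
proxy_record_wait(ProxyWaitStats *stats, TimestampTz wait_start)
{
	long		secs;
	int			usecs;
	uint64		wait_usec;

	TimestampDifference(wait_start, GetCurrentTimestamp(), &secs, &usecs);
	wait_usec = (uint64) secs * 1000000 + usecs;

	stats->n_waits++;
	stats->total_wait_usec += wait_usec;
	if (wait_usec > stats->max_wait_usec)
		stats->max_wait_usec = wait_usec;
}

The average wait time is then total_wait_usec / n_waits, which
pg_pooler_state() could expose alongside its existing per-proxy columns.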

Furthermore, how will you know from pg_stat_activity whether a connection
is coming through a connection pool or not? Or that it's (not) tainted? Or
how many backends are used by all connection pools combined?

Because those are questions people will be asking when investigating
issues, and so on.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#37Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#26)
1 attachment(s)
Re: Built-in connection pooler

On 26.07.2019 19:20, Ryan Lambert wrote:

The PgPro EE version of the connection pooler has an "idle_pool_worker_timeout"
parameter which allows idle workers to be terminated.

+1

I have implemented idle_pool_worker_timeout.
Also I added n_idle_clients and n_idle_backends fields to the proxy statistics
returned by the pg_pooler_state() function.
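
To illustrate, a sketch of where such counters could sit, modelled on the
ConnectionProxyState struct from the patch; the exact placement and the
comments below are assumptions rather than the actual contents of the
attached patch:

#include "postgres.h"

/* Sketch of the extended per-proxy state, not the struct from the patch. */
typedef struct ConnectionProxyStateSketch
{
	int			pid;					/* proxy worker pid */
	int			n_clients;				/* total number of clients */
	int			n_ssl_clients;			/* clients using SSL connections */
	int			n_idle_clients;			/* clients with no active transaction */
	int			n_pools;				/* number of dbname/role combinations */
	int			n_backends;				/* total number of launched backends */
	int			n_idle_backends;		/* pooled backends waiting for work */
	int			n_dedicated_backends;	/* tainted backends */
	uint64		tx_bytes;				/* data sent to clients */
	uint64		rx_bytes;				/* data received from clients */
	uint64		n_transactions;			/* transactions proxied so far */
} ConnectionProxyStateSketch;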

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-15.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..acd7041 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,137 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies scheduling policy for assigning session to proxies in case of
+          connection pooling. Default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..bc6547b
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures are proportional to the number of
+    active backends as well as complexity of algorithms for the data structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations are using some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional efforts for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    Built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    Number of proxy processes is controlled by <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in Postgres locking mechanism, only transaction level pooling policy is implemented.
+    It means that pooler is able to reschedule backend to another session only when it completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains a
+    hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    Maximal number of backends which can be spawned by connection pool is limited by
+    <varname>session_pool_size</varname> configuration variable.
+    So maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize number of changes in Postgres core, built-in connection pooler is not trying to save/restore
+    session context. If session context is modified by client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to other session.
+    Once this session is terminated, backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    Built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If client is connected to Postgres through standard port (<varname>port</varname> configuration option, default value is 5432), then normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when client is disconnected. Standard port is also used by proxy itself to
+    launch new worker backends. It means that to enable connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If client application is connected through proxy port, then its communication with backend is always
+    performed through proxy. Even if it changes session context and backend becomes <emphasis>tainted</emphasis>,
+    still all traffic between this client and backend comes through proxy.
+  </para>
+
+  <para>
+    Postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between them.
+    To provide uniform load balancing of proxies, postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case postmaster will choose proxy with smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    Connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. Maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large value can cause degradation of performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be large enough without any essential negative impact on system resources consumption.
+    Default value is 1000. So maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    Connection proxy accepts connections on special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that by default all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    Postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    Policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  Dropping it can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all its pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of some other components.
+    It also does not introduce any limitations for clients: existing clients can work through proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of client applications when a pooling policy other than session level is used.
+    And if an application is not changing session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve speed of simple queries
+    up to two times. But prepared statements can not be handled by pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), pgbench benchmark in select-only mode shows almost two times worse performance for local connections through connection pooler compared with direct local connections. For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction level pooling is that a long living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..5b19fef 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..acbaed3 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fd67d2a..10a14d0 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
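+/*
+ * Emulate socketpair() on Windows by connecting two loopback TCP sockets
+ * through a temporary listening socket.
+ */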
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a libpq connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 688ad43..049a76d 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
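+/*
+ * Create the socket pairs used to hand accepted client sockets over to proxy
+ * workers and launch the configured number of connection proxies.  This is
+ * done only when session pooling is enabled (SessionPoolSize and
+ * ConnectionProxiesNumber are both positive).
+ */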
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and of SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it would not be difficult to implement),
+ * so to balance the load we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
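+						/*
+						 * Connections accepted on the proxy port are passed to
+						 * one of the connection proxy workers through its
+						 * socket pair; everything else is served by a normal
+						 * backend as before.
+						 */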
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..57e3ba5
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1156 @@
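+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker.  Each proxy schedules the client sessions
+ *	  assigned to it onto a pool of backends, one session pool per
+ *	  database/user combination.
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */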
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for a connection proxy (several proxy workers can be launched, and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Wait event set with backend and client socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximum number of backends per session pool */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
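+/* ELOG is compiled out by default; uncomment the definition above to enable verbose proxy tracing */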
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					StringInfoData msgbuf;
+					initStringInfo(&msgbuf);
+					pq_sendbyte(&msgbuf, 'E');
+					pq_sendint32(&msgbuf, 7 + strlen(error));
+					pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+					pq_sendstring(&msgbuf, error);
+					pq_sendbyte(&msgbuf, '\0');
+					socket_write(chan, msgbuf.data, msgbuf.len);
+					pfree(msgbuf.data);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So such channels are linked into a list and deleted later.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (!peer)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or when the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or a shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent the handshake response */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* the query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for the pool associated with a particular dbname/role combination.
+ * The backend is launched by opening a local libpq connection, so the postmaster forks it using BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pooled backends.
+		 * It cannot be linked into the server statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
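+	/*
+	 * The handshake response is a sequence of protocol messages, each a
+	 * one-byte type followed by an int32 length; skip messages until the
+	 * BackendKeyData ('K') message, whose body carries the backend PID.
+	 */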
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions; the error was already reported */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend instead of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems that do not support epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' (terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because events referencing their peers may still be pending.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose information about the proxies' state.
+ */
+int
+ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void
+ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/*
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to the proxy
+ * n_ssl_clients  - number of clients using the SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by the proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum
+pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c36e9a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events, we cannot
+ * simply shift them; instead we maintain a free list of event slots.
+ * But poll() and WaitForMultipleObjects() operate on dense arrays of
+ * monitored descriptors, so the elements of the pollfds and handles arrays
+ * must be stored without holes, and we need a mapping between them and the
+ * WaitEventSet events.  This mapping is stored in the "permutation" array.
+ * We also need the backward mapping (from event to descriptor array slot),
+ * which is implemented using the "index" field of WaitEvent.
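+ *
+ * A small worked example (a sketch of the bookkeeping, assuming the poll()
+ * implementation): with three registered events we have
+ *     events[i].index = i, permutation = {0,1,2}, pollfds = {fd0,fd1,fd2}.
+ * DeleteWaitEventFromSet(set, 0) copies pollfds[2] into pollfds[0], sets
+ * permutation[0] = 2 and events[2].index = 0, decrements nevents to 2 and
+ * pushes slot 0 onto the free list (free_events = 0).  A subsequent
+ * AddWaitEventToSet() reuses event slot 0, giving it pos = 0 and index = 2.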
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of single-linked list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
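+	/* Reuse a slot from the free list if available, otherwise take the next unused slot */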
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
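+
+	/*
+	 * Keep the descriptor arrays dense: WaitEventAdjust* has already moved
+	 * the last descriptor into the removed slot (for poll/win32), so update
+	 * the permutation and the "index" of the event that owned that slot.
+	 */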
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
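+	/* Put the freed event slot on the free list, linked through "pos" */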
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* With epoll, edge-triggered (WL_SOCKET_EDGE) sockets are armed with EPOLLET, so there is nothing to modify here; ModifyWaitEvent is only needed to emulate edge triggering on other platforms */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +997,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1014,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections hang in
+		 * WaitForMultipleObjects, which does not signal the presence of input data
+		 * (even though the data can be read from the socket).  It looks like the
+		 * "reset" logic is not completely correct (the event is reset right after
+		 * receiving the previous read event).  Resetting all read events works
+		 * around this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..62ec2af 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4217,6 +4217,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
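+			/*
+			 * restart_pooler_on_reload: connection pool worker backends simply
+			 * exit on SIGHUP so that they are replaced by freshly started
+			 * backends using the new configuration.
+			 */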
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ffd1970..16ca58d 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -658,6 +659,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
 static void
 PreventAdvisoryLocksInParallelMode(void)
 {
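+	/*
+	 * Advisory locks are session-level state, so a backend that takes one is
+	 * marked tainted and stays dedicated to its client.
+	 */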
+	MyProc->is_tainted = true;
 	if (IsInParallelMode())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..b128b9c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..24b0d22 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and distributes connections among the proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2250,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2329,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4625,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session scheduling policy for the connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4646,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4832,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4922,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5096,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5112,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5138,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5499,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5794,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5818,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6152,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6215,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6849,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6891,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6953,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +7000,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7575,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7781,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7838,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8119,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8177,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8146,6 +8231,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
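+	/*
+	 * SET (unlike SET LOCAL) changes session-level GUC state, so the backend
+	 * becomes tainted and is no longer shared with other sessions.
+	 */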
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
@@ -8175,7 +8263,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8285,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8303,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8457,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8466,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8548,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8576,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8654,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9367,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9521,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9749,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9984,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10040,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10056,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10412,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10471,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10551,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10692,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10715,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10774,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11210,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11223,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11261,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11291,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11579,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..a3773b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10677,4 +10677,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 96415a9..6d1a926 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..7a93bf4 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,20 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 8ccd2af..8e2079b 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c0b8e3f..24569d8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 973691c..bcbfec3 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#38Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#1)
1 attachment(s)
Re: Built-in connection pooler

On 02.08.2019 12:57, DEV_OPS wrote:

Hello Konstantin

would you please re-base this patch? I'm going to test it, and back port
into PG10 stable and PG9 stable

thank you very much

Thank you.
Rebased patch is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-16.patch (text/x-patch)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..119daac 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,137 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and distributes connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, the postmaster cyclically distributes sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..bc6547b
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations are using some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+
+  <para>
+    Since each Postgres backend can work only with a single database, each proxy process maintains a
+    hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
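+
+  <para>
+    For example, with <varname>connection_proxies</varname> = 2 and <varname>session_pool_size</varname> = 10,
+    clients connecting to 3 databases under 2 distinct roles can be served by at most 2*10*3*2 = 120 pooled backends.
+  </para>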
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
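+
+  <para>
+    For example, a session that runs
+<programlisting>
+PREPARE q AS SELECT 1;
+</programlisting>
+    becomes tainted and keeps its dedicated backend until it disconnects.
+  </para>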
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
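+
+  <para>
+    For example, assuming the default ports, an application can switch to pooled connections simply by connecting to the proxy port:
+<programlisting>
+psql -h localhost -p 6543 postgres
+</programlisting>
+  </para>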
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be set quite large without any essential negative impact on system resource consumption.
+    The default value is 1000, so the maximal number of client connections is limited by <varname>connection_proxies</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends):
+    the connection pooler itself needs it to launch worker backends.
+  </para>
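+
+  <para>
+    For example, the following <filename>postgresql.conf</filename> fragment (the values are only illustrative) enables pooling
+    with two proxy processes, each managing up to four pooled backends per database/role combination:
+<programlisting>
+connection_proxies = 2
+session_pool_size = 4
+proxy_port = 6543
+</programlisting>
+  </para>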
+
+  <para>
+    The postmaster distributes sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
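+
+  <para>
+    For example:
+<programlisting>
+postgres=# SELECT * FROM pg_pooler_state();
+</programlisting>
+    returns one row per proxy worker with the counters described above.
+  </para>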
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively, you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If the database is not accessed for a long time, all its pool workers will eventually be terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation or configuration of any additional components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session level is used.
+    And if the application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If the application opens a database transaction and then waits for user input or some other external event, the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..5f2cd5f 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
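+
+/*
+ * Usage sketch (this mirrors how the postmaster and proxy workers use these
+ * functions; the variable names below are illustrative only):
+ *
+ *   pgsocket chan[2];
+ *   socketpair(AF_UNIX, SOCK_STREAM, 0, chan);        create the channel before forking the proxy
+ *   pg_send_sock(chan[0], client_sock, proxy_pid);    postmaster passes an accepted client socket
+ *   client_sock = pg_recv_sock(chan[1]);              proxy worker receives the descriptor
+ */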
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..0d1df3c 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..156a91d
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1156 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command and is outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If the backend has completed execution of some query, then it has definitely registered itself in the procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
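+		/* hash_search initializes only the key, so zero out the remaining SessionPool fields */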
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	return true;
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					StringInfoData msgbuf;
+					initStringInfo(&msgbuf);
+					pq_sendbyte(&msgbuf, 'E');
+					pq_sendint32(&msgbuf, 7 + strlen(error));
+					pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+					pq_sendstring(&msgbuf, error);
+					pq_sendbyte(&msgbuf, '\0');
+					socket_write(chan, msgbuf.data, msgbuf.len);
+					pfree(msgbuf.data);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link such channels into a singly-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (!peer)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
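+		/* The Terminate ('X') protocol message is a single type byte followed by a 4-byte length equal to 4 */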
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
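+			/*
+			 * Startup packet: 4-byte length that includes itself, no type byte.
+			 * Regular message: 1-byte type followed by a 4-byte length that counts
+			 * itself but not the type byte, hence the "+ 1" below.
+			 */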
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Skip terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it through the normal BackendStartup path.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response so that it can be replayed to clients attached to this backend later */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
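+	/* BackendKeyData ('K') layout: 1-byte type, 4-byte length, 4-byte backend PID, 4-byte cancel key */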
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		/* Too many sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
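+				/* Receive the client socket descriptor forwarded by the postmaster over the proxy's socket pair */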
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of backends in idle state
+ * n_idle_clients - number of clients in idle state
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c36e9a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we have to maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored descriptors.
+ * That is why elements in the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Head of the free-event list, linked via "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
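+	/* Reuse a slot from the free-event list if one is available; otherwise take the next unused slot */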
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
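+		/* Compact the dense descriptor array: move the last entry into the freed slot and update both mappings */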
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +997,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1014,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (although it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..b128b9c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..abac1cd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2250,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2329,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4625,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4646,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4832,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4922,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5096,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5112,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5138,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5499,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5794,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5818,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6152,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6215,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6849,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6891,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6953,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +7000,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7575,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7781,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7838,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8119,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8177,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8146,6 +8231,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
@@ -8175,7 +8263,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8285,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8303,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8457,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8466,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8548,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8576,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8654,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9367,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9521,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9749,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9984,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10040,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10056,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10412,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10471,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10551,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10692,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10715,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10774,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11210,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11223,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11261,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11291,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11579,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..7a93bf4 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,20 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, used temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#39Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#38)
Re: Built-in connection pooler

Hi Konstantin,

I did some testing with the latest patch [1] on a small local VM with 1 CPU
and 2GB RAM with the intention of exploring pg_pooler_state().

Configuration:

idle_pool_worker_timeout = 0 (default)
connection_proxies = 2
max_sessions = 10 (default)
max_connections = 1000

Initialized pgbench w/ scale 10 for the small server.

Running pgbench w/out connection pooler with 300 connections:

pgbench -p 5432 -c 300 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
progress: 15.0 s, 1343.3 tps, lat 123.097 ms stddev 380.780
progress: 30.0 s, 1086.7 tps, lat 155.586 ms stddev 376.963
progress: 45.1 s, 1103.8 tps, lat 156.644 ms stddev 347.058
progress: 60.6 s, 652.6 tps, lat 271.060 ms stddev 575.295
transaction type: <builtin: select only>
scaling factor: 10
query mode: simple
number of clients: 300
number of threads: 1
duration: 60 s
number of transactions actually processed: 63387
latency average = 171.079 ms
latency stddev = 439.735 ms
tps = 1000.918781 (including connections establishing)
tps = 1000.993926 (excluding connections establishing)

It crashes when I attempt to run with the connection pooler, 300
connections:

pgbench -p 6543 -c 300 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
connection to database "bench_test" failed:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

In the logs I get:

WARNING: PROXY: Failed to add new client - too much sessions: 18 clients,
1 backends. Try to increase 'max_sessions' configuration parameter.

The logs report 1 backend even though max_sessions is the default of 10.
Why is there only 1 backend reported? Is that error message getting the
right value?

Minor grammar fix, the logs on this warning should say "too many sessions"
instead of "too much sessions."

Reducing pgbench to only 30 connections keeps it from completely crashing
but it still does not run successfully.

pgbench -p 6543 -c 30 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
client 9 aborted in command 1 (SQL) of script 0; perhaps the backend died
while processing
client 11 aborted in command 1 (SQL) of script 0; perhaps the backend died
while processing
client 13 aborted in command 1 (SQL) of script 0; perhaps the backend died
while processing
...
...
progress: 15.0 s, 5734.5 tps, lat 1.191 ms stddev 10.041
progress: 30.0 s, 7789.6 tps, lat 0.830 ms stddev 6.251
progress: 45.0 s, 8211.3 tps, lat 0.810 ms stddev 5.970
progress: 60.0 s, 8466.5 tps, lat 0.789 ms stddev 6.151
transaction type: <builtin: select only>
scaling factor: 10
query mode: simple
number of clients: 30
number of threads: 1
duration: 60 s
number of transactions actually processed: 453042
latency average = 0.884 ms
latency stddev = 7.182 ms
tps = 7549.373416 (including connections establishing)
tps = 7549.402629 (excluding connections establishing)
Run was aborted; the above results are incomplete.

Logs for that run show (truncated):

2019-08-07 00:19:37.707 UTC [22152] WARNING: PROXY: Failed to add new
client - too much sessions: 18 clients, 1 backends. Try to increase
'max_sessions' configuration parameter.
2019-08-07 00:31:10.467 UTC [22151] WARNING: PROXY: Failed to add new
client - too much sessions: 15 clients, 4 backends. Try to increase
'max_sessions' configuration parameter.
2019-08-07 00:31:10.468 UTC [22152] WARNING: PROXY: Failed to add new
client - too much sessions: 15 clients, 4 backends. Try to increase
'max_sessions' configuration parameter.
...
...

Here it is reporting fewer clients with more backends. Still, only 4
backends reported with 15 clients doesn't seem right. Looking at the
results from pg_pooler_state() at the same time (below) showed 5 and 7
backends for the two different proxies, so why are the logs only reporting
4 backends when pg_pooler_state() reports 12 total?

Why is n_idle_clients negative? In this case it showed -21 and -17. Each
proxy reported 7 clients, and with max_sessions = 10 those
n_idle_clients values don't make sense to me.

postgres=# SELECT * FROM pg_pooler_state();
  pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | n_idle_backends | n_idle_clients | tx_bytes | rx_bytes | n_transactions
-------+-----------+---------------+---------+------------+----------------------+-----------------+----------------+----------+----------+----------------
 25737 |         7 |             0 |       1 |          5 |                    0 |               0 |            -21 |  4099541 |  3896792 |          61959
 25738 |         7 |             0 |       1 |          7 |                    0 |               2 |            -17 |  4530587 |  4307474 |          68490
(2 rows)

I get errors running pgbench down to only 20 connections with this
configuration. I tried adjusting connection_proxies = 1 and it handles even
fewer connections. Setting connection_proxies = 4 allows it to handle 20
connections without error, but by 40 connections it starts having issues.

While I don't have expectations of this working great (or even decently) on a
tiny server, I don't expect it to crash in a case where the standard
connections work. Also, the logs and the function both show that the total
number of backends is less than the total available, and the two don't seem
to agree on the details.

I think it would help to have details about the pg_pooler_state function
added to the docs, maybe in this section [2]?

I'll take some time later this week to examine pg_pooler_state further on a
more appropriately sized server.
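
In the meantime, to compare totals against the log messages I have been
summing the per-proxy counters with a quick ad-hoc query. This only uses the
columns pg_pooler_state() already exposes; the query itself is just an
illustration, not something from the patch:

SELECT sum(n_clients)            AS clients,
       sum(n_backends)           AS backends,
       sum(n_dedicated_backends) AS dedicated_backends,
       sum(n_idle_backends)      AS idle_backends,
       sum(n_idle_clients)       AS idle_clients
FROM pg_pooler_state();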

Thanks,

[1]: /messages/by-id/attachment/103046/builtin_connection_proxy-16.patch
[2]: https://www.postgresql.org/docs/current/functions-info.html

Ryan Lambert

#40Li Japin
japinli@hotmail.com
In reply to: Konstantin Knizhnik (#38)
Re: Built-in connection pooler

Hi, Konstantin

I tested patch-16 on the postgresql master branch, and I found that the
temporary table is not removed when we re-connect to it. Here is my test:

japin@ww-it:~/WwIT/postgresql/Debug/connpool$ initdb
The files belonging to this database system will be owned by user "japin".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /home/japin/WwIT/postgresql/Debug/connpool/DATA ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Asia/Shanghai
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /home/japin/WwIT/postgresql/Debug/connpool/DATA -l logfile start

japin@ww-it:~/WwIT/postgresql/Debug/connpool$ pg_ctl -l /tmp/log start
waiting for server to start.... done
server started
japin@ww-it:~/WwIT/postgresql/Debug/connpool$ psql postgres
psql (13devel)
Type "help" for help.

postgres=# ALTER SYSTEM SET connection_proxies TO 1;
ALTER SYSTEM
postgres=# ALTER SYSTEM SET session_pool_size TO 1;
ALTER SYSTEM
postgres=# \q
japin@ww-it:~/WwIT/postgresql/Debug/connpool$ pg_ctl -l /tmp/log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
japin@ww-it:~/WwIT/postgresql/Debug/connpool$ psql -p 6543 postgres
psql (13devel)
Type "help" for help.

postgres=# CREATE TEMP TABLE test(id int, info text);
CREATE TABLE
postgres=# INSERT INTO test SELECT id, md5(id::text) FROM generate_series(1, 10) id;
INSERT 0 10
postgres=# select * from pg_pooler_state();
 pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | n_idle_backends | n_idle_clients | tx_bytes | rx_bytes | n_transactions
------+-----------+---------------+---------+------------+----------------------+-----------------+----------------+----------+----------+----------------
 3885 |         1 |             0 |       1 |          1 |                    0 |               0 |              0 |     1154 |     2880 |              6
(1 row)

postgres=# \q
japin@ww-it:~/WwIT/postgresql/Debug/connpool$ psql -p 6543 postgres
psql (13devel)
Type "help" for help.

postgres=# \d
        List of relations
  Schema   | Name | Type  | Owner
-----------+------+-------+-------
 pg_temp_3 | test | table | japin
(1 row)

postgres=# select * from pg_pooler_state();
 pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | n_idle_backends | n_idle_clients | tx_bytes | rx_bytes | n_transactions
------+-----------+---------------+---------+------------+----------------------+-----------------+----------------+----------+----------+----------------
 3885 |         1 |             0 |       1 |          1 |                    0 |               0 |              0 |     2088 |     3621 |              8
(1 row)

postgres=# select * from test ;
 id |               info
----+----------------------------------
  1 | c4ca4238a0b923820dcc509a6f75849b
  2 | c81e728d9d4c2f636f067f89cc14862c
  3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
  4 | a87ff679a2f3e71d9181a67b7542122c
  5 | e4da3b7fbbce2345d7772b0674a318d5
  6 | 1679091c5a880faf6fb5e6087eb1b2dc
  7 | 8f14e45fceea167a5a36dedd4bea2543
  8 | c9f0f895fb98ab9159f51fd0297e236d
  9 | 45c48cce2e2d7fbdea1afc51c7c6ad26
 10 | d3d9446802a44259755d38e6d163e820
(10 rows)

I inspected the code, and found the following code in the DefineRelation function:

if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
        && stmt->oncommit != ONCOMMIT_DROP)
        MyProc->is_tainted = true;

For a temporary table, MyProc->is_tainted should be true, so I changed it as
follows:

if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
        || stmt->oncommit == ONCOMMIT_DROP)
        MyProc->is_tainted = true;

With this change, temporary tables work. I am not sure the change is right.

On 8/2/19 7:05 PM, Konstantin Knizhnik wrote:

On 02.08.2019 12:57, DEV_OPS wrote:

Hello Konstantin

would you please re-base this patch? I'm going to test it, and back port
into PG10 stable and PG9 stable

thank you very much

Thank you.
Rebased patch is attached.

#41Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Li Japin (#40)
Re: Built-in connection pooler

Hi, Li

Thank you very much for reporting the problem.

On 07.08.2019 7:21, Li Japin wrote:

I inspect the code, and find the following code in DefineRelation function:

if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
        && stmt->oncommit != ONCOMMIT_DROP)
        MyProc->is_tainted = true;

For temporary table, MyProc->is_tainted might be true, I changed it as
following:

if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
        || stmt->oncommit == ONCOMMIT_DROP)
        MyProc->is_tainted = true;

For temporary table, it works. I not sure the changes is right.

Sorry, it is really a bug.
My intention was to mark the backend as tainted if it creates a temporary
table without the ON COMMIT DROP clause (with ON COMMIT DROP the temporary
table is local to the transaction and so causes no problems for the pooler).
The right condition is:

    if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
        && stmt->oncommit != ONCOMMIT_DROP)
        MyProc->is_tainted = true;
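
Just to illustrate this rule (my own example, not part of the patch): a
session-scoped temporary table makes the backend dedicated, while a temporary
table created with ON COMMIT DROP inside a transaction should leave it
poolable:

    -- session-scoped temporary table: the backend becomes tainted (dedicated)
    CREATE TEMP TABLE t_session(id int);

    -- transaction-scoped temporary table: dropped at commit, the backend can stay in the pool
    BEGIN;
    CREATE TEMP TABLE t_txn(id int) ON COMMIT DROP;
    INSERT INTO t_txn VALUES (1);
    COMMIT;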

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#42Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#39)
1 attachment(s)
Re: Built-in connection pooler

Hi Ryan,

Sorry, it looks like there is a misunderstanding about the meaning of the
max_sessions parameter.
First of all, the default value of this parameter is 1000, not 10.
It looks like you explicitly specified the value 10, and that caused these problems.

So the "max_sessions" parameter specifies how many sessions can be handled
by one backend.
Certainly it makes sense only if the pooler is switched on (the number of
proxies is not zero).
If the pooler is switched off, then a backend handles exactly one session.

There is no much sense in limiting number of sessions server by one
backend, because the main goal of connection pooler is to handle arbitrary
number of client connections with limited number of backends.
The only reason for presence of this parameter is that WaitEventSet
requires to specify maximal number of events.
And proxy needs to multiplex connected backends and clients. So it
create WaitEventSet with size max_sessions*2 (mutiplied by two because
it has to listen both for clients and backends).
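
A minimal sketch of that sizing (not the exact code from the patch; the
C-level name of the GUC is assumed here to be MaxSessions):

    /* the proxy multiplexes every client socket together with its backend
       socket in one wait event set, so it reserves two slots per session */
    WaitEventSet *proxy_wait_events =
        CreateWaitEventSet(TopMemoryContext, MaxSessions * 2);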

So the value of this parameter should be large enough. The default value is
1000, but there should be no problem setting it to 10000 or even 1000000
(hoping that the OS will support it).
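
For example, a configuration along these lines (values are only illustrative)
leaves plenty of headroom:

    connection_proxies = 2
    session_pool_size = 10
    max_sessions = 1000    # per-proxy limit; only sizes the wait event set
    proxy_port = 6543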

But the observed behavior ("server closed the connection unexpectedly" and a
negative number of idle clients) is certainly not correct.
I attached to this mail a patch which fixes both problems: it correctly
reports the error to the client and correctly calculates the number of idle clients.
The new version is also available in my Git repository:
https://github.com/postgrespro/postgresql.builtin_pool.git
branch conn_proxy.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-17.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..119daac 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,137 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest estimated load.
+          The load of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..bc6547b
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms operating on those structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on highly loaded systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend is able to work only with a single database, each proxy process maintains a
+    hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client is connected to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be quite large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, in which case all connections to the databases will be pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  Dropping the database can still be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers that have not been used for the specified time. If a database is not accessed for a long time, then all of its pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level pooling is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    by up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for quite a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly it is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..a76db8d
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..0d1df3c 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..a8d7322
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1177 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool	 write_pending;		 /* write request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	bool	 read_pending;		 /* read request is pending: emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has it sown proxy instance).
+ * Proxy contains hash of session pools for reach role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
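+/* Proxy trace logging is compiled out by default; to enable it, replace this empty definition with the elog()-based one shown above. */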
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, some other client can now be scheduled to it.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
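+	/* 7 = length field (4) + field code (1) + string NUL sent by pq_sendstring (1) + trailing '\0' terminator (1) */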
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
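+		/* Remember when this backend was last handed to a client; proxy_loop uses it for the idle worker timeout check */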
+		if (IdlePoolWorkerTimeout)
+			idle_backend->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link such channels into a singly-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	else if (rc < 0)
+	{
+		/* do not accept more read events while write request is pending */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = true;
+	}
+	else if (chan->write_pending)
+	{
+		/* resume accepting read events */
+		ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+		chan->write_pending = false;
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation is successfully completed, false in case of error
+ * or when the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			else
+			{
+				/* do not accept more write events while read request is pending */
+				ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+				chan->read_pending = true;
+			}
+			return false; /* wait for more data */
+		}
+		else if (chan->read_pending)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->read_pending = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the terminate message to idle and non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent the handshake response */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
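+	/* The handshake response is a sequence of messages, each a 1-byte type followed by a 4-byte
+	 * length (the length covers itself and the body, but not the type byte). Skip messages until
+	 * BackendKeyData ('K'), whose body starts with the backend PID. */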
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+					ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+					channel_write(chan, false);
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+				}
+			}
+		}
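+		/* Terminate pooled backends that have stayed idle longer than idle_pool_worker_timeout:
+		 * marking them interrupted makes channel_write send a Terminate ('X') message. */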
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because pending peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from another auxiliary workers.
+ * In future it may be replaced with background worker.
+ * The main problem with background worker is how to pass socket to it and obtains its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxy state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of backends in idle state
+ * n_idle_clients  - number of clients in idle state
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c36e9a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events. So we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operates on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Head of singly-linked list of free events, linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +997,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1014,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I had a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..b128b9c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..abac1cd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by ont connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2250,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2329,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4625,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4646,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4832,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4922,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5096,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5112,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5138,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5499,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5794,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5818,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6152,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6215,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6849,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6891,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6953,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +7000,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7575,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7781,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7838,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8119,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8177,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8146,6 +8231,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
@@ -8175,7 +8263,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8285,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8303,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8457,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8466,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8548,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8576,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8654,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9367,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9521,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9749,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9984,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10040,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10056,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10412,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10471,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10551,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10692,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10715,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10774,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11210,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11223,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11261,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11291,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11579,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..7a93bf4 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,20 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#43Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#42)
Re: Built-in connection pooler

First of all, the default value of this parameter is 1000, not 10.

Oops, my bad! Sorry about that, I'm not sure how I got that in my head
last night but I see how that would make it act strange now. I'll adjust
my notes before re-testing. :)

Thanks,

*Ryan Lambert*

On Wed, Aug 7, 2019 at 4:57 AM Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

Show quoted text

Hi Ryan,

On 07.08.2019 6:18, Ryan Lambert wrote:

Hi Konstantin,

I did some testing with the latest patch [1] on a small local VM with
1 CPU and 2GB RAM with the intention of exploring pg_pooler_state().

Configuration:

idle_pool_worker_timeout = 0 (default)
connection_proxies = 2
max_sessions = 10 (default)
max_connections = 1000

Initialized pgbench w/ scale 10 for the small server.

Running pgbench w/out connection pooler with 300 connections:

pgbench -p 5432 -c 300 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
progress: 15.0 s, 1343.3 tps, lat 123.097 ms stddev 380.780
progress: 30.0 s, 1086.7 tps, lat 155.586 ms stddev 376.963
progress: 45.1 s, 1103.8 tps, lat 156.644 ms stddev 347.058
progress: 60.6 s, 652.6 tps, lat 271.060 ms stddev 575.295
transaction type: <builtin: select only>
scaling factor: 10
query mode: simple
number of clients: 300
number of threads: 1
duration: 60 s
number of transactions actually processed: 63387
latency average = 171.079 ms
latency stddev = 439.735 ms
tps = 1000.918781 (including connections establishing)
tps = 1000.993926 (excluding connections establishing)

It crashes when I attempt to run with the connection pooler, 300
connections:

pgbench -p 6543 -c 300 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
connection to database "bench_test" failed:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

In the logs I get:

WARNING: PROXY: Failed to add new client - too much sessions: 18
clients, 1 backends. Try to increase 'max_sessions' configuration
parameter.

The logs report 1 backend even though max_sessions is the default of
10. Why is there only 1 backend reported? Is that error message
getting the right value?

Minor grammar fix, the logs on this warning should say "too many
sessions" instead of "too much sessions."

Reducing pgbench to only 30 connections keeps it from completely
crashing but it still does not run successfully.

pgbench -p 6543 -c 30 -j 1 -T 60 -P 15 -S bench_test
starting vacuum...end.
client 9 aborted in command 1 (SQL) of script 0; perhaps the backend
died while processing
client 11 aborted in command 1 (SQL) of script 0; perhaps the backend
died while processing
client 13 aborted in command 1 (SQL) of script 0; perhaps the backend
died while processing
...
...
progress: 15.0 s, 5734.5 tps, lat 1.191 ms stddev 10.041
progress: 30.0 s, 7789.6 tps, lat 0.830 ms stddev 6.251
progress: 45.0 s, 8211.3 tps, lat 0.810 ms stddev 5.970
progress: 60.0 s, 8466.5 tps, lat 0.789 ms stddev 6.151
transaction type: <builtin: select only>
scaling factor: 10
query mode: simple
number of clients: 30
number of threads: 1
duration: 60 s
number of transactions actually processed: 453042
latency average = 0.884 ms
latency stddev = 7.182 ms
tps = 7549.373416 (including connections establishing)
tps = 7549.402629 (excluding connections establishing)
Run was aborted; the above results are incomplete.

Logs for that run show (truncated):

2019-08-07 00:19:37.707 UTC [22152] WARNING: PROXY: Failed to add new
client - too much sessions: 18 clients, 1 backends. Try to increase
'max_sessions' configuration parameter.
2019-08-07 00:31:10.467 UTC [22151] WARNING: PROXY: Failed to add new
client - too much sessions: 15 clients, 4 backends. Try to increase
'max_sessions' configuration parameter.
2019-08-07 00:31:10.468 UTC [22152] WARNING: PROXY: Failed to add new
client - too much sessions: 15 clients, 4 backends. Try to increase
'max_sessions' configuration parameter.
...
...

Here it is reporting fewer clients with more backends. Still, only 4
backends reported with 15 clients doesn't seem right. Looking at the
results from pg_pooler_state() at the same time (below) showed 5 and 7
backends for the two different proxies, so why are the logs only
reporting 4 backends when pg_pooler_state() reports 12 total?

Why is n_idle_clients negative? In this case it showed -21 and -17.
Each proxy reported 7 clients, with max_sessions = 10, having those
n_idle_client results doesn't make sense to me.

postgres=# SELECT * FROM pg_pooler_state();
  pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | n_idle_backends | n_idle_clients | tx_bytes | rx_bytes | n_transactions
-------+-----------+---------------+---------+------------+----------------------+-----------------+----------------+----------+----------+----------------
 25737 |         7 |             0 |       1 |          5 |                    0 |               0 |            -21 |  4099541 |  3896792 |          61959
 25738 |         7 |             0 |       1 |          7 |                    0 |               2 |            -17 |  4530587 |  4307474 |          68490
(2 rows)

I get errors running pgbench down to only 20 connections with this
configuration. I tried adjusting connection_proxies = 1 and it handles
even fewer connections. Setting connection_proxies = 4 allows it to
handle 20 connections without error, but by 40 connections it starts
having issues.

While I don't have expectations of this working great (or even decent)
on a tiny server, I don't expect it to crash in a case where the
standard connections work. Also, the logs and the function both show
that the total backends is less than the total available and the two
don't seem to agree on the details.

I think it would help to have details about the pg_pooler_state
function added to the docs, maybe in this section [2]?

I'll take some time later this week to examine pg_pooler_state further
on a more appropriately sized server.

Thanks,

[1]

/messages/by-id/attachment/103046/builtin_connection_proxy-16.patch

[2] https://www.postgresql.org/docs/current/functions-info.html

Ryan Lambert

Sorry, it looks like there is a misunderstanding about the meaning of the
max_sessions parameter.
First of all, the default value of this parameter is 1000, not 10.
It looks like you explicitly specified the value 10, and that caused these problems.

So "max_sessions" parameter specifies how much sessions can be handled
by one backend.
Certainly it makes sense only if pooler is switched on (number of
proxies is not zero).
If pooler is switched off, than backend is handling exactly one session/

There is not much sense in limiting the number of sessions served by one
backend, because the main goal of the connection pooler is to handle an
arbitrary number of client connections with a limited number of backends.
The only reason for the presence of this parameter is that WaitEventSet
requires the maximal number of events to be specified.
And the proxy needs to multiplex connected backends and clients. So it
creates a WaitEventSet of size max_sessions*2 (multiplied by two because
it has to listen both for clients and backends).

So the value of this parameter should be large enough. The default value is
1000, but there should be no problem setting it to 10000 or even 1000000
(hoping that the OS will support it).

But the observed behavior ("server closed the connection unexpectedly" and
a negative number of idle clients) is certainly not correct.
I attached to this mail a patch which fixes both problems: it correctly
reports the error to the client and calculates the number of idle clients.
The new version is also available in my Git repository:
https://github.com/postgrespro/postgresql.builtin_pool.git
branch conn_proxy.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#44Ryan Lambert
ryan@rustprooflabs.com
In reply to: Konstantin Knizhnik (#42)
Re: Built-in connection pooler

I attached to this mail a patch which fixes both problems: it correctly
reports the error to the client and calculates the number of idle clients.

Yes, this works much better with max_sessions=1000. Now it's handling the
300 connections on the small server. n_idle_clients now looks accurate
with the rest of the stats here.

postgres=# SELECT n_clients, n_backends, n_idle_backends, n_idle_clients
           FROM pg_pooler_state();
 n_clients | n_backends | n_idle_backends | n_idle_clients
-----------+------------+-----------------+----------------
       150 |         10 |               9 |            149
       150 |         10 |               6 |            146
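
For reference, the pooler-related postgresql.conf settings for this run were
essentially the following (a sketch based on the configuration discussed
above; everything else was left at defaults):

connection_proxies = 2
session_pool_size = 10       # default
max_sessions = 1000          # default; the earlier failures came from setting this to 10
max_connections = 1000
proxy_port = 6543            # pgbench connects here to go through the pooler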

Ryan Lambert

Show quoted text
#45Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Ryan Lambert (#44)
1 attachment(s)
Re: Built-in connection pooler

An updated version of the patch is attached.
I rewrote the edge-triggered mode emulation and have tested it on macOS.
So right now three major platforms are covered: Linux, macOS and Windows.
In theory it should work on most other Unix dialects.
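
For readers not familiar with the trick: edge-triggered behavior can be
emulated on top of a level-triggered poll() loop by disarming a descriptor
once it reports an event and re-arming it only after the data has been
consumed. The following standalone sketch illustrates just the idea (it is
not code from the patch; names and structure are illustrative only):

/*
 * Emulating edge-triggered wakeups over level-triggered poll():
 * once an fd reports POLLIN it is disarmed (events = 0) so a
 * still-readable socket does not wake the loop again until the
 * consumer drains it and re-arms the descriptor.
 */
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int			sv[2];
	struct pollfd pfd;
	char		buf[16];
	int			round;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 1;

	write(sv[1], "ping", 4);			/* make sv[0] readable */

	pfd.fd = sv[0];
	pfd.events = POLLIN;

	for (round = 0; round < 3; round++)
	{
		int			rc = poll(&pfd, 1, 100);	/* 100 ms timeout */

		if (rc > 0 && (pfd.revents & POLLIN))
		{
			printf("round %d: readable, disarming fd\n", round);
			pfd.events = 0;			/* "edge" seen: no more wakeups until re-armed */
		}
		else
			printf("round %d: no event (fd is disarmed)\n", round);
	}

	read(sv[0], buf, sizeof(buf));		/* consume the data ... */
	pfd.events = POLLIN;				/* ... and re-arm the descriptor */

	close(sv[0]);
	close(sv[1]);
	return 0;
}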

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-18.patchtext/x-patch; name=builtin_connection_proxy-18.patchDownload
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..119daac 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,137 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies in case of
+          connection pooling. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..bc6547b
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,174 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, are proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of configuration variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between them.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster will choose the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. Maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can degrade performance because of large snapshots and lock contention.
+  </para>
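+
+  <para>
+    For example, with <varname>connection_proxies</varname> = 2, <varname>session_pool_size</varname> = 10,
+    and clients connecting under one role to two databases, up to 2*10*2*1 = 40 non-dedicated backends
+    may be launched.
+  </para>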
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be fairly large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections made to the default port are pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends):
+    the connection pooler itself needs it to launch worker backends.
+  </para>
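+
+  <para>
+    A minimal <literal>postgresql.conf</literal> fragment enabling the pooler might look like this
+    (the values are only illustrative):
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+max_sessions = 1000
+proxy_port = 6543
+</programlisting>
+  </para>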
+
+  <para>
+    The postmaster distributes sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which distributes sessions cyclically between proxies.
+    It should work well for a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. Proxy state can be monitored using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
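+
+  <para>
+    For example, the current state of all proxies can be inspected with:
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>
+  </para>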
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected. This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function, after which the database can be dropped. Alternatively, you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers that have not been used for the specified time. If a database is not accessed for a long time, all of its pool workers are terminated.
+  </para>
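+
+  <para>
+    A possible sequence for dropping a database while the pooler is running might be (a sketch, assuming
+    <varname>restart_pooler_on_reload</varname> can be changed at run time and <literal>mydb</literal> is the database to drop):
+<programlisting>
+ALTER SYSTEM SET restart_pooler_on_reload = true;
+SELECT pg_reload_conf();   -- pooled backends are shut down
+DROP DATABASE mydb;
+</programlisting>
+  </para>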
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it is served by a dedicated backend. Such a connection does not participate in
+    connection pooling but still works correctly. This is the main difference from pgbouncer,
+    where using a pooling policy other than session-level pooling may cause incorrect behavior of the client application.
+    If an application does not change the session context, it can be pooled implicitly, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save and resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid session-specific operations or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients use prepared statements, there will effectively be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooled mode outperforms direct connections.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. This greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
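+
+  <para>
+    For example, in <literal>postgresql.conf</literal> (the value is only illustrative):
+<programlisting>
+idle_in_transaction_session_timeout = '10min'
+</programlisting>
+  </para>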
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required there, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. This is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
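+	/*
+	 * The descriptor itself is passed as ancillary data of type SCM_RIGHTS;
+	 * the kernel installs a duplicate of it in the receiving process.
+	 */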
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
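+	/*
+	 * Windows has no socketpair(), so emulate it: create a temporary listening
+	 * socket on the loopback interface and connect a pair of TCP sockets
+	 * through it.
+	 */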
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..0d1df3c 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
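+			/*
+			 * One socket pair is created per proxy: the postmaster keeps socks[0]
+			 * and uses it to pass accepted client sockets to the proxy, which
+			 * receives them on socks[1].
+			 */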
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
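+							/*
+							 * Connection was accepted on the proxy port:
+							 * hand the client socket over to one of the proxy workers.
+							 */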
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5059,7 +5244,6 @@ ExitPostmaster(int status)
 				 errmsg_internal("postmaster became multithreaded"),
 				 errdetail("Please report this to <pgsql-bugs@lists.postgresql.org>.")));
 #endif
-
 	/* should cleanup shared memory and kill all backends */
 
 	/*
@@ -5526,6 +5710,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6368,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6603,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..5f19ad6
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1174 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext memctx;		 /* Memory context for this proxy (used only in single thread) */
+	MemoryContext tmpctx;		 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in tmpctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->tmpctx);
+	MemoryContextSwitchTo(chan->proxy->tmpctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->tmpctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other epoll events.
+ * So such channels are linked into a single list of channels pending deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send the 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * The data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once a write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of an error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = realloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port /* Message from backend */
+					&& chan->buf[msg_start] == 'Z'	/* Ready for query */
+					&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+				{
+					Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+					chan->backend_is_ready = true; /* Backend is ready for query */
+					chan->proxy->state->n_transactions += 1;
+				}
+				else if (chan->client_port /* Message from client */
+						 && chan->buf[msg_start] == 'X')	/* Terminate message */
+				{
+					chan->is_interrupted = true;
+					if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+					{
+						/* Do not forward the Terminate message to idle or non-tainted backends */
+						channel_hangout(chan, "terminate");
+						return false;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We do not expect messages from an idle backend. Assume it indicates an error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* The handshake response will be sent to the client later, when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* the query will be sent later, once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)calloc(1, sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = malloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the pool associated with a particular dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
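+	/* The backend is obtained by opening a regular libpq connection back to the postmaster
+	 * port; application_name "pool_worker" marks such backends (see the
+	 * restart_pooler_on_reload check in postgres.c). */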
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = malloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
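+	/* The saved handshake response is replayed to every new client that is later attached to this backend */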
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
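+	/* msg now points at the BackendKeyData ('K') message: skip the type byte and four-byte length to read the backend PID */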
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(chan->backend_socket);
+		free(chan->buf);
+		free(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. The client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		free(port->gss);
+#endif
+		free(port);
+		free(chan->buf);
+		free(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		free(chan->client_port);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		free(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	free(chan->buf);
+	free(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy = calloc(1, sizeof(Proxy));
+	proxy->memctx = AllocSetContextCreate(TopMemoryContext,
+										  "Proxy",
+										  ALLOCSET_DEFAULT_SIZES);
+	proxy->tmpctx = AllocSetContextCreate(proxy->memctx,
+										  "Startup packet parsing context",
+										  ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy->memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
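+				/* The postmaster accepted a new client and passed its socket descriptor to this proxy; receive it here */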
+				Port* port = (Port*)calloc(1, sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					free(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->peer == NULL || chan->peer->tx_size == 0) /* nothing to write */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->tx_size != 0) /* pending write: suspend reading until the data is forwarded */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
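+			/* Periodically scan all pools and send Terminate to idle backends that have exceeded idle_pool_worker_timeout */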
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the is_interrupted flag makes channel_write send an 'X' (Terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because of the presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..c36e9a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operates on a dense array of watched descriptors.
+ * That is why the elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array slot), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of the free-event list, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* WL_SOCKET_EDGE requests only emulate epoll's EPOLLET flag on other platforms; with epoll, edge triggering is native, so ignore them here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -912,7 +997,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -929,8 +1014,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..b128b9c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 1;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,4 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..abac1cd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -550,7 +558,7 @@ int			huge_pages;
 
 /*
  * These variables are all dummies that don't do anything, except in some
- * cases provide the value for SHOW to display.  The real state is elsewhere
+ * cases provide the value for SHOW to display.	 The real state is elsewhere
  * and is kept in sync by assign_hooks.
  */
 static char *syslog_ident_str;
@@ -1166,7 +1174,7 @@ static struct config_bool ConfigureNamesBool[] =
 			gettext_noop("Writes full pages to WAL when first modified after a checkpoint."),
 			gettext_noop("A page write in process during an operating system crash might be "
 						 "only partially written to disk.  During recovery, the row changes "
-						 "stored in WAL are not enough to recover.  This option writes "
+						 "stored in WAL are not enough to recover.	This option writes "
 						 "pages when first modified after a checkpoint to WAL so full recovery "
 						 "is possible.")
 		},
@@ -1286,6 +1294,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2156,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero, session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and can actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2250,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -2254,7 +2329,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	/*
 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
-	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
+	 * default for max_stack_depth.	 InitializeGUCOptions will increase it if
 	 * possible, depending on the actual platform-specific stack limit.
 	 */
 	{
@@ -4550,6 +4625,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Sets the session scheduling policy for the connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -4561,7 +4646,7 @@ static struct config_enum ConfigureNamesEnum[] =
 
 /*
  * To allow continued support of obsolete names for GUC variables, we apply
- * the following mappings to any unrecognized name.  Note that an old name
+ * the following mappings to any unrecognized name.	 Note that an old name
  * should be mapped to a new one only if the new variable has very similar
  * semantics to the old.
  */
@@ -4747,7 +4832,7 @@ extra_field_used(struct config_generic *gconf, void *extra)
 }
 
 /*
- * Support for assigning to an "extra" field of a GUC item.  Free the prior
+ * Support for assigning to an "extra" field of a GUC item.	 Free the prior
  * value if it's not referenced anywhere else in the item (including stacked
  * states).
  */
@@ -4837,7 +4922,7 @@ get_guc_variables(void)
 
 
 /*
- * Build the sorted array.  This is split out so that it could be
+ * Build the sorted array.	This is split out so that it could be
  * re-executed after startup (e.g., we could allow loadable modules to
  * add vars, and then we'd need to re-sort).
  */
@@ -5011,7 +5096,7 @@ add_placeholder_variable(const char *name, int elevel)
 
 	/*
 	 * The char* is allocated at the end of the struct since we have no
-	 * 'static' place to point to.  Note that the current value, as well as
+	 * 'static' place to point to.	Note that the current value, as well as
 	 * the boot and reset values, start out NULL.
 	 */
 	var->variable = (char **) (var + 1);
@@ -5027,7 +5112,7 @@ add_placeholder_variable(const char *name, int elevel)
 }
 
 /*
- * Look up option NAME.  If it exists, return a pointer to its record,
+ * Look up option NAME.	 If it exists, return a pointer to its record,
  * else return NULL.  If create_placeholders is true, we'll create a
  * placeholder record for a valid-looking custom variable name.
  */
@@ -5053,7 +5138,7 @@ find_option(const char *name, bool create_placeholders, int elevel)
 		return *res;
 
 	/*
-	 * See if the name is an obsolete name for a variable.  We assume that the
+	 * See if the name is an obsolete name for a variable.	We assume that the
 	 * set of supported old names is short enough that a brute-force search is
 	 * the best way.
 	 */
@@ -5414,7 +5499,7 @@ SelectConfigFiles(const char *userDoption, const char *progname)
 	}
 
 	/*
-	 * Read the configuration file for the first time.  This time only the
+	 * Read the configuration file for the first time.	This time only the
 	 * data_directory parameter is picked up to determine the data directory,
 	 * so that we can read the PG_AUTOCONF_FILENAME file next time.
 	 */
@@ -5709,7 +5794,7 @@ AtStart_GUC(void)
 {
 	/*
 	 * The nest level should be 0 between transactions; if it isn't, somebody
-	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.  We
+	 * didn't call AtEOXact_GUC, or called it with the wrong nestLevel.	 We
 	 * throw a warning but make no other effort to clean up.
 	 */
 	if (GUCNestLevel != 0)
@@ -5733,10 +5818,10 @@ NewGUCNestLevel(void)
 /*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
- * transient assignment to some GUC variables.  (The name is thus a bit of
+ * transient assignment to some GUC variables.	(The name is thus a bit of
  * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
  * During abort, we discard all GUC settings that were applied at nesting
- * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
+ * levels >= nestLevel.	 nestLevel == 1 corresponds to the main transaction.
  */
 void
 AtEOXact_GUC(bool isCommit, int nestLevel)
@@ -6067,7 +6152,7 @@ ReportGUCOption(struct config_generic *record)
 
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
- * to the given base unit.  'value' and 'unit' are the input value and unit
+ * to the given base unit.	'value' and 'unit' are the input value and unit
  * to convert from (there can be trailing spaces in the unit string).
  * The converted value is stored in *base_value.
  * It's caller's responsibility to round off the converted value as necessary
@@ -6130,7 +6215,7 @@ convert_to_base_unit(double value, const char *unit,
  * Convert an integer value in some base unit to a human-friendly unit.
  *
  * The output unit is chosen so that it's the greatest unit that can represent
- * the value without loss.  For example, if the base unit is GUC_UNIT_KB, 1024
+ * the value without loss.	For example, if the base unit is GUC_UNIT_KB, 1024
  * is converted to 1 MB, but 1025 is represented as 1025 kB.
  */
 static void
@@ -6764,7 +6849,7 @@ set_config_option(const char *name, const char *value,
 
 	/*
 	 * GUC_ACTION_SAVE changes are acceptable during a parallel operation,
-	 * because the current worker will also pop the change.  We're probably
+	 * because the current worker will also pop the change.	 We're probably
 	 * dealing with a function having a proconfig entry.  Only the function's
 	 * body should observe the change, and peer workers do not share in the
 	 * execution of a function call started by this worker.
@@ -6806,7 +6891,7 @@ set_config_option(const char *name, const char *value,
 			{
 				/*
 				 * We are re-reading a PGC_POSTMASTER variable from
-				 * postgresql.conf.  We can't change the setting, so we should
+				 * postgresql.conf.	 We can't change the setting, so we should
 				 * give a warning if the DBA tries to change it.  However,
 				 * because of variant formats, canonicalization by check
 				 * hooks, etc, we can't just compare the given string directly
@@ -6868,7 +6953,7 @@ set_config_option(const char *name, const char *value,
 				 * non-default settings from the CONFIG_EXEC_PARAMS file
 				 * during backend start.  In that case we must accept
 				 * PGC_SIGHUP settings, so as to have the same value as if
-				 * we'd forked from the postmaster.  This can also happen when
+				 * we'd forked from the postmaster.	 This can also happen when
 				 * using RestoreGUCState() within a background worker that
 				 * needs to have the same settings as the user backend that
 				 * started it. is_reload will be true when either situation
@@ -6915,9 +7000,9 @@ set_config_option(const char *name, const char *value,
 	 * An exception might be made if the reset value is assumed to be "safe".
 	 *
 	 * Note: this flag is currently used for "session_authorization" and
-	 * "role".  We need to prohibit changing these inside a local userid
+	 * "role".	We need to prohibit changing these inside a local userid
 	 * context because when we exit it, GUC won't be notified, leaving things
-	 * out of sync.  (This could be fixed by forcing a new GUC nesting level,
+	 * out of sync.	 (This could be fixed by forcing a new GUC nesting level,
 	 * but that would change behavior in possibly-undesirable ways.)  Also, we
 	 * prohibit changing these in a security-restricted operation because
 	 * otherwise RESET could be used to regain the session user's privileges.
@@ -7490,7 +7575,7 @@ set_config_sourcefile(const char *name, char *sourcefile, int sourceline)
  * Set a config option to the given value.
  *
  * See also set_config_option; this is just the wrapper to be called from
- * outside GUC.  (This function should be used when possible, because its API
+ * outside GUC.	 (This function should be used when possible, because its API
  * is more stable than set_config_option's.)
  *
  * Note: there is no support here for setting source file/line, as it
@@ -7696,7 +7781,7 @@ flatten_set_variable_args(const char *name, List *args)
 		Node	   *arg = (Node *) lfirst(l);
 		char	   *val;
 		TypeName   *typeName = NULL;
-		A_Const    *con;
+		A_Const	   *con;
 
 		if (l != list_head(args))
 			appendStringInfoString(&buf, ", ");
@@ -7753,7 +7838,7 @@ flatten_set_variable_args(const char *name, List *args)
 				else
 				{
 					/*
-					 * Plain string literal or identifier.  For quote mode,
+					 * Plain string literal or identifier.	For quote mode,
 					 * quote it if it's not a vanilla identifier.
 					 */
 					if (flags & GUC_LIST_QUOTE)
@@ -8034,7 +8119,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 
 	/*
 	 * Only one backend is allowed to operate on PG_AUTOCONF_FILENAME at a
-	 * time.  Use AutoFileLock to ensure that.  We must hold the lock while
+	 * time.  Use AutoFileLock to ensure that.	We must hold the lock while
 	 * reading the old file contents.
 	 */
 	LWLockAcquire(AutoFileLock, LW_EXCLUSIVE);
@@ -8092,7 +8177,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)
 						AutoConfTmpFileName)));
 
 	/*
-	 * Use a TRY block to clean up the file if we fail.  Since we need a TRY
+	 * Use a TRY block to clean up the file if we fail.	 Since we need a TRY
 	 * block anyway, OK to use BasicOpenFile rather than OpenTransientFile.
 	 */
 	PG_TRY();
@@ -8146,6 +8231,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
@@ -8175,7 +8263,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("transaction_isolation",
@@ -8197,7 +8285,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 
 				foreach(head, stmt->args)
 				{
-					DefElem    *item = (DefElem *) lfirst(head);
+					DefElem	   *item = (DefElem *) lfirst(head);
 
 					if (strcmp(item->defname, "transaction_isolation") == 0)
 						SetPGVariable("default_transaction_isolation",
@@ -8215,7 +8303,7 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 			}
 			else if (strcmp(stmt->name, "TRANSACTION SNAPSHOT") == 0)
 			{
-				A_Const    *con = linitial_node(A_Const, stmt->args);
+				A_Const	   *con = linitial_node(A_Const, stmt->args);
 
 				if (stmt->is_local)
 					ereport(ERROR,
@@ -8369,7 +8457,7 @@ init_custom_variable(const char *name,
 	/*
 	 * We can't support custom GUC_LIST_QUOTE variables, because the wrong
 	 * things would happen if such a variable were set or pg_dump'd when the
-	 * defining extension isn't loaded.  Again, treat this as fatal because
+	 * defining extension isn't loaded.	 Again, treat this as fatal because
 	 * the loadable module may be partly initialized already.
 	 */
 	if (flags & GUC_LIST_QUOTE)
@@ -8378,7 +8466,7 @@ init_custom_variable(const char *name,
 	/*
 	 * Before pljava commit 398f3b876ed402bdaec8bc804f29e2be95c75139
 	 * (2015-12-15), two of that module's PGC_USERSET variables facilitated
-	 * trivial escalation to superuser privileges.  Restrict the variables to
+	 * trivial escalation to superuser privileges.	Restrict the variables to
 	 * protect sites that have yet to upgrade pljava.
 	 */
 	if (context == PGC_USERSET &&
@@ -8460,9 +8548,9 @@ define_custom_variable(struct config_generic *variable)
 	 * variable.  Essentially, we need to duplicate all the active and stacked
 	 * values, but with appropriate validation and datatype adjustment.
 	 *
-	 * If an assignment fails, we report a WARNING and keep going.  We don't
+	 * If an assignment fails, we report a WARNING and keep going.	We don't
 	 * want to throw ERROR for bad values, because it'd bollix the add-on
-	 * module that's presumably halfway through getting loaded.  In such cases
+	 * module that's presumably halfway through getting loaded.	 In such cases
 	 * the default or previous state will become active instead.
 	 */
 
@@ -8488,7 +8576,7 @@ define_custom_variable(struct config_generic *variable)
 	/*
 	 * Free up as much as we conveniently can of the placeholder structure.
 	 * (This neglects any stack items, so it's possible for some memory to be
-	 * leaked.  Since this can only happen once per session per variable, it
+	 * leaked.	Since this can only happen once per session per variable, it
 	 * doesn't seem worth spending much code on.)
 	 */
 	set_string_field(pHolder, pHolder->variable, NULL);
@@ -8566,9 +8654,9 @@ reapply_stacked_values(struct config_generic *variable,
 	else
 	{
 		/*
-		 * We are at the end of the stack.  If the active/previous value is
+		 * We are at the end of the stack.	If the active/previous value is
 		 * different from the reset value, it must represent a previously
-		 * committed session value.  Apply it, and then drop the stack entry
+		 * committed session value.	 Apply it, and then drop the stack entry
 		 * that set_config_option will have created under the impression that
 		 * this is to be just a transactional assignment.  (We leak the stack
 		 * entry.)
@@ -9279,7 +9367,7 @@ show_config_by_name(PG_FUNCTION_ARGS)
 
 /*
  * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as
- * a function.  If X does not exist, suppress the error and just return NULL
+ * a function.	If X does not exist, suppress the error and just return NULL
  * if missing_ok is true.
  */
 Datum
@@ -9433,7 +9521,7 @@ show_all_settings(PG_FUNCTION_ARGS)
  * which includes the config file pathname, the line number, a sequence number
  * indicating the order in which the settings were encountered, the parameter
  * name and value, a bool showing if the value could be applied, and possibly
- * an associated error message.  (For problems such as syntax errors, the
+ * an associated error message.	 (For problems such as syntax errors, the
  * parameter name/value might be NULL.)
  *
  * Note: no filtering is done here, instead we depend on the GRANT system
@@ -9661,7 +9749,7 @@ _ShowOption(struct config_generic *record, bool use_units)
 
 /*
  *	These routines dump out all non-default GUC options into a binary
- *	file that is read by all exec'ed backends.  The format is:
+ *	file that is read by all exec'ed backends.	The format is:
  *
  *		variable name, string, null terminated
  *		variable value, string, null terminated
@@ -9896,14 +9984,14 @@ read_nondefault_variables(void)
  *
  * A PGC_S_DEFAULT setting on the serialize side will typically match new
  * postmaster children, but that can be false when got_SIGHUP == true and the
- * pending configuration change modifies this setting.  Nonetheless, we omit
+ * pending configuration change modifies this setting.	Nonetheless, we omit
  * PGC_S_DEFAULT settings from serialization and make up for that by restoring
  * defaults before applying serialized values.
  *
  * PGC_POSTMASTER variables always have the same value in every child of a
  * particular postmaster.  Most PGC_INTERNAL variables are compile-time
  * constants; a few, like server_encoding and lc_ctype, are handled specially
- * outside the serialize/restore procedure.  Therefore, SerializeGUCState()
+ * outside the serialize/restore procedure.	 Therefore, SerializeGUCState()
  * never sends these, and RestoreGUCState() never changes them.
  *
  * Role is a special variable in the sense that its current value can be an
@@ -9952,7 +10040,7 @@ estimate_variable_size(struct config_generic *gconf)
 
 				/*
 				 * Instead of getting the exact display length, use max
-				 * length.  Also reduce the max length for typical ranges of
+				 * length.	Also reduce the max length for typical ranges of
 				 * small values.  Maximum value is 2147483647, i.e. 10 chars.
 				 * Include one byte for sign.
 				 */
@@ -9968,7 +10056,7 @@ estimate_variable_size(struct config_generic *gconf)
 				/*
 				 * We are going to print it with %e with REALTYPE_PRECISION
 				 * fractional digits.  Account for sign, leading digit,
-				 * decimal point, and exponent with up to 3 digits.  E.g.
+				 * decimal point, and exponent with up to 3 digits.	 E.g.
 				 * -3.99329042340000021e+110
 				 */
 				valsize = 1 + 1 + 1 + REALTYPE_PRECISION + 5;
@@ -10324,7 +10412,7 @@ ParseLongOption(const char *string, char **name, char **value)
 
 /*
  * Handle options fetched from pg_db_role_setting.setconfig,
- * pg_proc.proconfig, etc.  Caller must specify proper context/source/action.
+ * pg_proc.proconfig, etc.	Caller must specify proper context/source/action.
  *
  * The array parameter must be an array of TEXT (it must not be NULL).
  */
@@ -10383,7 +10471,7 @@ ProcessGUCArray(ArrayType *array,
 
 
 /*
- * Add an entry to an option array.  The array parameter may be NULL
+ * Add an entry to an option array.	 The array parameter may be NULL
  * to indicate the current table entry is NULL.
  */
 ArrayType *
@@ -10463,7 +10551,7 @@ GUCArrayAdd(ArrayType *array, const char *name, const char *value)
 
 /*
  * Delete an entry from an option array.  The array parameter may be NULL
- * to indicate the current table entry is NULL.  Also, if the return value
+ * to indicate the current table entry is NULL.	 Also, if the return value
  * is NULL then a null should be stored.
  */
 ArrayType *
@@ -10604,8 +10692,8 @@ GUCArrayReset(ArrayType *array)
 /*
  * Validate a proposed option setting for GUCArrayAdd/Delete/Reset.
  *
- * name is the option name.  value is the proposed value for the Add case,
- * or NULL for the Delete/Reset cases.  If skipIfNoPermissions is true, it's
+ * name is the option name.	 value is the proposed value for the Add case,
+ * or NULL for the Delete/Reset cases.	If skipIfNoPermissions is true, it's
  * not an error to have no permissions to set the option.
  *
  * Returns true if OK, false if skipIfNoPermissions is true and user does not
@@ -10627,13 +10715,13 @@ validate_option_array_item(const char *name, const char *value,
 	 * SUSET and user is superuser).
 	 *
 	 * name is not known, but exists or can be created as a placeholder (i.e.,
-	 * it has a prefixed name).  We allow this case if you're a superuser,
+	 * it has a prefixed name).	 We allow this case if you're a superuser,
 	 * otherwise not.  Superusers are assumed to know what they're doing. We
 	 * can't allow it for other users, because when the placeholder is
 	 * resolved it might turn out to be a SUSET variable;
 	 * define_custom_variable assumes we checked that.
 	 *
-	 * name is not known and can't be created as a placeholder.  Throw error,
+	 * name is not known and can't be created as a placeholder.	 Throw error,
 	 * unless skipIfNoPermissions is true, in which case return false.
 	 */
 	gconf = find_option(name, true, WARNING);
@@ -10686,7 +10774,7 @@ validate_option_array_item(const char *name, const char *value,
  * ERRCODE_INVALID_PARAMETER_VALUE SQLSTATE for check hook failures.
  *
  * Note that GUC_check_errmsg() etc are just macros that result in a direct
- * assignment to the associated variables.  That is ugly, but forced by the
+ * assignment to the associated variables.	That is ugly, but forced by the
  * limitations of C's macro mechanisms.
  */
 void
@@ -11122,7 +11210,7 @@ check_canonical_path(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * Since canonicalize_path never enlarges the string, we can just modify
-	 * newval in-place.  But watch out for NULL, which is the default value
+	 * newval in-place.	 But watch out for NULL, which is the default value
 	 * for external_pid_file.
 	 */
 	if (*newval)
@@ -11135,7 +11223,7 @@ check_timezone_abbreviations(char **newval, void **extra, GucSource source)
 {
 	/*
 	 * The boot_val given above for timezone_abbreviations is NULL. When we
-	 * see this we just do nothing.  If this value isn't overridden from the
+	 * see this we just do nothing.	 If this value isn't overridden from the
 	 * config file then pg_timezone_abbrev_initialize() will eventually
 	 * replace it with "Default".  This hack has two purposes: to avoid
 	 * wasting cycles loading values that might soon be overridden from the
@@ -11173,7 +11261,7 @@ assign_timezone_abbreviations(const char *newval, void *extra)
 /*
  * pg_timezone_abbrev_initialize --- set default value if not done already
  *
- * This is called after initial loading of postgresql.conf.  If no
+ * This is called after initial loading of postgresql.conf.	 If no
  * timezone_abbreviations setting was found therein, select default.
  * If a non-default value is already installed, nothing will happen.
  *
@@ -11203,7 +11291,7 @@ assign_tcp_keepalives_idle(int newval, void *extra)
 	 * The kernel API provides no way to test a value without setting it; and
 	 * once we set it we might fail to unset it.  So there seems little point
 	 * in fully implementing the check-then-assign GUC API for these
-	 * variables.  Instead we just do the assignment on demand.  pqcomm.c
+	 * variables.  Instead we just do the assignment on demand.	 pqcomm.c
 	 * reports any problems via elog(LOG).
 	 *
 	 * This approach means that the GUC value might have little to do with the
@@ -11491,11 +11579,11 @@ assign_recovery_target_timeline(const char *newval, void *extra)
 
 /*
  * Recovery target settings: Only one of the several recovery_target* settings
- * may be set.  Setting a second one results in an error.  The global variable
- * recoveryTarget tracks which kind of recovery target was chosen.  Other
+ * may be set.	Setting a second one results in an error.  The global variable
+ * recoveryTarget tracks which kind of recovery target was chosen.	Other
  * variables store the actual target value (for example a string or a xid).
  * The assign functions of the parameters check whether a competing parameter
- * was already set.  But we want to allow setting the same parameter multiple
+ * was already set.	 But we want to allow setting the same parameter multiple
  * times.  We also want to allow unsetting a parameter and setting a different
  * one, so we unset recoveryTarget when the parameter is set to an empty
  * string.
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index b07be12..dac74a2 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -506,7 +506,7 @@ MemoryContextStatsDetail(MemoryContext context, int max_children)
  * *totals (if given).
  */
 static void
-MemoryContextStatsInternal(MemoryContext context, int level,
+ MemoryContextStatsInternal(MemoryContext context, int level,
 						   bool print, int max_children,
 						   MemoryContextCounters *totals)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pooler
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..7a93bf4 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,20 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, used temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#46Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#36)
1 attachment(s)
Re: Built-in connection pooler

On 30.07.2019 16:12, Tomas Vondra wrote:

On Tue, Jul 30, 2019 at 01:01:48PM +0300, Konstantin Knizhnik wrote:

On 30.07.2019 4:02, Tomas Vondra wrote:

My idea (sorry if it wasn't too clear) was that we might handle some
cases more gracefully.

For example, if we only switch between transactions, we don't quite care
about 'SET LOCAL' (but the current patch does set the tainted flag). The
same thing applies to GUCs set for a function.
For prepared statements, we might count the number of statements we
prepared and deallocated, and treat it as 'not tainted' when there are no
statements. Maybe there's some risk I can't think of.

The same thing applies to temporary tables - if you create and drop a
temporary table, is there a reason to still treat the session as tainted?

I have implemented one more trick that reduces the number of tainted backends:
it is now possible to use session variables with pooled backends.

How does it work?
The proxy detects "SET var=" statements and converts them to "SET LOCAL var=".
All such assignments are also concatenated and stored in the session context
at the proxy.
The proxy then injects this statement into each transaction block or prepends
it to standalone statements.

This mechanism works only for GUCs set outside a transaction.
It is switched off by default; to enable it, switch on the
"proxying_gucs" parameter.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-19.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..7aaddfe 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,153 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is used. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting of session parameters is replaced with setting of local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..899fd1c
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,175 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler can reschedule a backend to another session only once the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend can work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each pair of <literal>dbname,role</literal>.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows session parameters to be set without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistical information about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function; then it becomes possible to drop the database. Alternatively, you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all its pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session-level is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    by up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend cannot be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
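+/*
+ * Establish a libpq connection using the given keyword/value parameter arrays.
+ * On failure, a copy of the error message is returned through *error and NULL is returned.
+ */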
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
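+/*
+ * Module load callback: register libpq_connectdb as the connection routine
+ * used by the proxy (via the LibpqConnectdbParams hook).
+ */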
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
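+/*
+ * Postmaster-side descriptor of a connection proxy worker: the worker's PID
+ * and the socketpair used to pass accepted client sockets to it.
+ */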
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
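+/*
+ * Start the configured number of connection proxy workers. The socketpairs
+ * used to pass client sockets to the proxies are created once and reused
+ * across proxy restarts.
+ */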
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate workload of proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..c28cefd
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1263 @@
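+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker: multiplexes pooled client sessions over a
+ *	  limited number of backends and forwards traffic between them.
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */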
+#include "postgres.h"
+
+#include <unistd.h>
+#include <errno.h>
+
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for a connection proxy (several proxy workers can be launched and each has its own Proxy instance).
+ * A Proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Wait event set with the socket descriptors of backends and clients */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool for a particular role/dbname combination
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
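+/* Proxy instance of this worker and the identifiers assigned to it by the postmaster */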
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be referenced by other epoll events.
+ * So link such channels into a singly-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
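+/*
+ * Check whether a simple-query statement starts a transaction block (BEGIN or START TRANSACTION).
+ */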
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if (ProxyingGUCs && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (pg_strncasecmp(stmt, "set", 3) == 0
+							&& pg_strncasecmp(stmt+3, " local", 6) != 0)
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, stmt+3,
+												  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send SET command to check if it is correct.
+							 * To avoid "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs)
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send a handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later, when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later, once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->peer == NULL || chan->peer->tx_size == 0) /* nothing to write */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->tx_size != 0) /* pending write: stop reading until it is sent */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
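+/*
+ * Main entry point of a connection proxy worker process. Sets up signal
+ * handling and shared-memory access, then runs the proxy event loop until
+ * shutdown is requested.
+ */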
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
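+/* Per-call state of the pg_pooler_state() set-returning function */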
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning functions returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of backends in idle state
+ * n_idle_clients  - number of clients in idle state
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * Connection pooler needs to delete events from event set.
+ * As far as we have too preserve positions of all other events,
+ * we can not move events. So we have to maintain list of free events.
+ * But poll/WaitForMultipleObjects manipulates with array of listened events.
+ * That is why elements in pollfds and handle arrays should be stored without holes
+ * and we need to maintain mapping between them and WaitEventSet events.
+ * This mapping is stored in "permutation" array. Also we need backward mapping
+ * (from event to descriptors array) which is implemented using "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * On Windows I ran into a problem where SSPI connections hang in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (even though the data can be read from the socket).
+		 * The "reset" logic appears not to be completely correct (the event is reset right after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..aab2976 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,5 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..dc2e5f9 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,26 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2166,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by ont connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2260,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4635,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8241,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pooler
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..a8e57f4 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,21 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to backends */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#47Jaime Casanova
jaime.casanova@2ndquadrant.com
In reply to: Konstantin Knizhnik (#46)
3 attachment(s)
Re: Built-in connection pooler

On Thu, 15 Aug 2019 at 06:01, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

I have implemented one more trick to reduce the number of tainted backends:
it is now possible to use session variables in pooled backends.

How does it work?
The proxy detects "SET var=" statements and converts them to "SET LOCAL
var=".
All such assignments are concatenated and stored in the session context
at the proxy.
The proxy then injects this statement into each transaction block or
prepends it to standalone statements.

This mechanism works only for GUCs set outside a transaction.
By default it is switched off. To enable it you should switch on the
"proxying_gucs" parameter.

there is definitely something odd here. I applied the patch and
changed these parameters

connection_proxies = '3'
session_pool_size = '33'
port = '5433'
proxy_port = '5432'

after this I ran "make installcheck"; the idea is to check whether an
application going through the proxy behaves sanely. As far as I
understood, if the client needs session semantics its backend gets
tainted; otherwise it works in transaction-pooling mode.
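
For reference, judging from the guc.c and lock.c hunks and the is_tainted
comment in proc.h, statements of the following kinds should taint a
backend, while transaction-scoped work keeps it poolable (a hand-written
sketch, not taken from the patch or its tests):

-- these dedicate ("taint") the serving backend to the client:
SET work_mem = '64MB';            -- session-level GUC change (unless proxying_gucs intercepts it)
CREATE TEMP TABLE tmp(x int);     -- temporary table
PREPARE q AS SELECT 1;            -- named prepared statement
SELECT pg_advisory_lock(42);      -- session-level lock

-- these keep the backend poolable:
BEGIN;
SET LOCAL work_mem = '64MB';
SELECT 1;
COMMIT;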

Sadly I got a lot of FAILED tests; I'm attaching the regression diffs
for installcheck and installcheck-parallel.
btw, after make installcheck-parallel I wanted to run a new test but
wasn't able to drop the regression database because there was still a
subscription, so I tried to drop the subscription and got a core file
(I was connected through the pool_worker); I'm attaching the backtrace
of the crash too.

--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

gdb-bt-core-drop-subscription.txt
regression-parallel.diffs
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/text.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/text.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/text.out	2019-07-12 13:20:36.241287721 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/text.out	2019-09-05 16:27:37.483450179 -0500
@@ -63,7 +63,7 @@
 select concat(1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
         concat        
 ----------------------
- 123hellotf03-09-2010
+ 123hellotf2010-03-09
 (1 row)
 
 select concat_ws('#','one');
@@ -75,7 +75,7 @@
 select concat_ws('#',1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
          concat_ws          
 ----------------------------
- 1#2#3#hello#t#f#03-09-2010
+ 1#2#3#hello#t#f#2010-03-09
 (1 row)
 
 select concat_ws(',',10,20,null,30);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rangetypes.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rangetypes.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rangetypes.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rangetypes.out	2019-09-05 16:27:38.823564246 -0500
@@ -619,25 +619,25 @@
 select daterange('2000-01-10'::date, '2000-01-20'::date, '[]');
         daterange        
 -------------------------
- [01-10-2000,01-21-2000)
+ [2000-01-10,2000-01-21)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '[)');
         daterange        
 -------------------------
- [01-10-2000,01-20-2000)
+ [2000-01-10,2000-01-20)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '(]');
         daterange        
 -------------------------
- [01-11-2000,01-21-2000)
+ [2000-01-11,2000-01-21)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '()');
         daterange        
 -------------------------
- [01-11-2000,01-20-2000)
+ [2000-01-11,2000-01-20)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-11'::date, '()');
@@ -649,31 +649,31 @@
 select daterange('2000-01-10'::date, '2000-01-11'::date, '(]');
         daterange        
 -------------------------
- [01-11-2000,01-12-2000)
+ [2000-01-11,2000-01-12)
 (1 row)
 
 select daterange('-infinity'::date, '2000-01-01'::date, '()');
        daterange        
 ------------------------
- (-infinity,01-01-2000)
+ (-infinity,2000-01-01)
 (1 row)
 
 select daterange('-infinity'::date, '2000-01-01'::date, '[)');
        daterange        
 ------------------------
- [-infinity,01-01-2000)
+ [-infinity,2000-01-01)
 (1 row)
 
 select daterange('2000-01-01'::date, 'infinity'::date, '[)');
        daterange       
 -----------------------
- [01-01-2000,infinity)
+ [2000-01-01,infinity)
 (1 row)
 
 select daterange('2000-01-01'::date, 'infinity'::date, '[]');
        daterange       
 -----------------------
- [01-01-2000,infinity]
+ [2000-01-01,infinity]
 (1 row)
 
 -- test GiST index that's been built incrementally
@@ -1166,13 +1166,13 @@
 insert into test_range_excl
   values(int4range(123, 123, '[]'), int4range(3, 3, '[]'), '[2010-01-02 10:10, 2010-01-02 11:00)');
 ERROR:  conflicting key value violates exclusion constraint "test_range_excl_room_during_excl"
-DETAIL:  Key (room, during)=([123,124), ["Sat Jan 02 10:10:00 2010","Sat Jan 02 11:00:00 2010")) conflicts with existing key (room, during)=([123,124), ["Sat Jan 02 10:00:00 2010","Sat Jan 02 11:00:00 2010")).
+DETAIL:  Key (room, during)=([123,124), ["2010-01-02 10:10:00","2010-01-02 11:00:00")) conflicts with existing key (room, during)=([123,124), ["2010-01-02 10:00:00","2010-01-02 11:00:00")).
 insert into test_range_excl
   values(int4range(124, 124, '[]'), int4range(3, 3, '[]'), '[2010-01-02 10:10, 2010-01-02 11:10)');
 insert into test_range_excl
   values(int4range(125, 125, '[]'), int4range(1, 1, '[]'), '[2010-01-02 10:10, 2010-01-02 11:00)');
 ERROR:  conflicting key value violates exclusion constraint "test_range_excl_speaker_during_excl"
-DETAIL:  Key (speaker, during)=([1,2), ["Sat Jan 02 10:10:00 2010","Sat Jan 02 11:00:00 2010")) conflicts with existing key (speaker, during)=([1,2), ["Sat Jan 02 10:00:00 2010","Sat Jan 02 11:00:00 2010")).
+DETAIL:  Key (speaker, during)=([1,2), ["2010-01-02 10:10:00","2010-01-02 11:00:00")) conflicts with existing key (speaker, during)=([1,2), ["2010-01-02 10:00:00","2010-01-02 11:00:00")).
 -- test bigint ranges
 select int8range(10000000000::int8, 20000000000::int8,'(]');
          int8range         
@@ -1183,9 +1183,9 @@
 -- test tstz ranges
 set timezone to '-08';
 select '[2010-01-01 01:00:00 -05, 2010-01-01 02:00:00 -08)'::tstzrange;
-                            tstzrange                            
------------------------------------------------------------------
- ["Thu Dec 31 22:00:00 2009 -08","Fri Jan 01 02:00:00 2010 -08")
+                      tstzrange                      
+-----------------------------------------------------
+ ["2009-12-31 22:00:00-08","2010-01-01 02:00:00-08")
 (1 row)
 
 -- should fail
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/date.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/date.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/date.out	2019-08-12 14:55:05.422229943 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/date.out	2019-09-05 16:27:39.191595572 -0500
@@ -24,44 +24,44 @@
 SELECT f1 AS "Fifteen" FROM DATE_TBL;
   Fifteen   
 ------------
- 04-09-1957
- 06-13-1957
- 02-28-1996
- 02-29-1996
- 03-01-1996
- 03-02-1996
- 02-28-1997
- 03-01-1997
- 03-02-1997
- 04-01-2000
- 04-02-2000
- 04-03-2000
- 04-08-2038
- 04-09-2039
- 04-10-2040
+ 1957-04-09
+ 1957-06-13
+ 1996-02-28
+ 1996-02-29
+ 1996-03-01
+ 1996-03-02
+ 1997-02-28
+ 1997-03-01
+ 1997-03-02
+ 2000-04-01
+ 2000-04-02
+ 2000-04-03
+ 2038-04-08
+ 2039-04-09
+ 2040-04-10
 (15 rows)
 
 SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
     Nine    
 ------------
- 04-09-1957
- 06-13-1957
- 02-28-1996
- 02-29-1996
- 03-01-1996
- 03-02-1996
- 02-28-1997
- 03-01-1997
- 03-02-1997
+ 1957-04-09
+ 1957-06-13
+ 1996-02-28
+ 1996-02-29
+ 1996-03-01
+ 1996-03-02
+ 1997-02-28
+ 1997-03-01
+ 1997-03-02
 (9 rows)
 
 SELECT f1 AS "Three" FROM DATE_TBL
   WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
    Three    
 ------------
- 04-01-2000
- 04-02-2000
- 04-03-2000
+ 2000-04-01
+ 2000-04-02
+ 2000-04-03
 (3 rows)
 
 --
@@ -1140,63 +1140,63 @@
 -- test trunc function!
 --
 SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
-        date_trunc        
---------------------------
- Thu Jan 01 00:00:00 1001
+     date_trunc      
+---------------------
+ 1001-01-01 00:00:00
 (1 row)
 
 SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
           date_trunc          
 ------------------------------
- Thu Jan 01 00:00:00 1001 PST
+ 1001-01-01 00:00:00-05:19:20
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
-        date_trunc        
---------------------------
- Tue Jan 01 00:00:00 1901
+     date_trunc      
+---------------------
+ 1901-01-01 00:00:00
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
-          date_trunc          
-------------------------------
- Tue Jan 01 00:00:00 1901 PST
+        date_trunc         
+---------------------------
+ 1901-01-01 00:00:00-05:14
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
-          date_trunc          
-------------------------------
- Mon Jan 01 00:00:00 2001 PST
+       date_trunc       
+------------------------
+ 2001-01-01 00:00:00-05
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
           date_trunc          
 ------------------------------
- Mon Jan 01 00:00:00 0001 PST
+ 0001-01-01 00:00:00-05:19:20
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
            date_trunc            
 ---------------------------------
- Tue Jan 01 00:00:00 0100 PST BC
+ 0100-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
-          date_trunc          
-------------------------------
- Mon Jan 01 00:00:00 1990 PST
+       date_trunc       
+------------------------
+ 1990-01-01 00:00:00-05
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
            date_trunc            
 ---------------------------------
- Sat Jan 01 00:00:00 0001 PST BC
+ 0001-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
            date_trunc            
 ---------------------------------
- Mon Jan 01 00:00:00 0011 PST BC
+ 0011-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 --
@@ -1448,13 +1448,13 @@
 select make_date(2013, 7, 15);
  make_date  
 ------------
- 07-15-2013
+ 2013-07-15
 (1 row)
 
 select make_date(-44, 3, 15);
    make_date   
 ---------------
- 03-15-0044 BC
+ 0044-03-15 BC
 (1 row)
 
 select make_time(8, 20, 0.0);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamp.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamp.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamp.out	2019-08-12 14:55:05.458232999 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamp.out	2019-09-05 16:27:39.571627919 -0500
@@ -168,80 +168,80 @@
 LINE 1: INSERT INTO TIMESTAMP_TBL VALUES ('Feb 16 17:32:01 5097 BC')...
                                           ^
 SELECT '' AS "64", d1 FROM TIMESTAMP_TBL;
- 64 |             d1              
-----+-----------------------------
+ 64 |           d1           
+----+------------------------
     | -infinity
     | infinity
-    | Thu Jan 01 00:00:00 1970
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 00:00:00 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1970-01-01 00:00:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 00:00:00
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (65 rows)
 
 -- Check behavior at the lower boundary of the timestamp range
 SELECT '4714-11-24 00:00:00 BC'::timestamp;
-          timestamp          
------------------------------
- Mon Nov 24 00:00:00 4714 BC
+       timestamp        
+------------------------
+ 4714-11-24 00:00:00 BC
 (1 row)
 
 SELECT '4714-11-23 23:59:59 BC'::timestamp;  -- out of range
@@ -252,300 +252,300 @@
 -- Demonstrate functions and operators
 SELECT '' AS "48", d1 FROM TIMESTAMP_TBL
    WHERE d1 > timestamp without time zone '1997-01-02';
- 48 |             d1             
-----+----------------------------
+ 48 |          d1           
+----+-----------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (49 rows)
 
 SELECT '' AS "15", d1 FROM TIMESTAMP_TBL
    WHERE d1 < timestamp without time zone '1997-01-02';
- 15 |             d1              
-----+-----------------------------
+ 15 |           d1           
+----+------------------------
     | -infinity
-    | Thu Jan 01 00:00:00 1970
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
+    | 1970-01-01 00:00:00
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
 (15 rows)
 
 SELECT '' AS one, d1 FROM TIMESTAMP_TBL
    WHERE d1 = timestamp without time zone '1997-01-02';
- one |            d1            
------+--------------------------
-     | Thu Jan 02 00:00:00 1997
+ one |         d1          
+-----+---------------------
+     | 1997-01-02 00:00:00
 (1 row)
 
 SELECT '' AS "63", d1 FROM TIMESTAMP_TBL
    WHERE d1 != timestamp without time zone '1997-01-02';
- 63 |             d1              
-----+-----------------------------
+ 63 |           d1           
+----+------------------------
     | -infinity
     | infinity
-    | Thu Jan 01 00:00:00 1970
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1970-01-01 00:00:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (64 rows)
 
 SELECT '' AS "16", d1 FROM TIMESTAMP_TBL
    WHERE d1 <= timestamp without time zone '1997-01-02';
- 16 |             d1              
-----+-----------------------------
+ 16 |           d1           
+----+------------------------
     | -infinity
-    | Thu Jan 01 00:00:00 1970
-    | Thu Jan 02 00:00:00 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
+    | 1970-01-01 00:00:00
+    | 1997-01-02 00:00:00
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
 (16 rows)
 
 SELECT '' AS "49", d1 FROM TIMESTAMP_TBL
    WHERE d1 >= timestamp without time zone '1997-01-02';
- 49 |             d1             
-----+----------------------------
+ 49 |          d1           
+----+-----------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 00:00:00 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 00:00:00
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (50 rows)
 
 SELECT '' AS "54", d1 - timestamp without time zone '1997-01-02' AS diff
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 1724 days 18 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 13 hours 14 mins 2 secs
-    | @ 1168 days 12 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 2 hours 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 18 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |        diff         
+----+---------------------
+    | -9863 days
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:02
+    | 39 days 17:32:01.4
+    | 39 days 17:32:01.5
+    | 39 days 17:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 17:32:01
+    | 1724 days 18:19:20
+    | 1168 days 08:14:01
+    | 1168 days 13:14:02
+    | 1168 days 12:14:03
+    | 1168 days 03:14:04
+    | 1168 days 02:14:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 273 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 18:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (55 rows)
 
 SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2004-02-29 15:44:17.71393' ) AS week_trunc;
- date_trunc_week |        week_trunc        
------------------+--------------------------
-                 | Mon Feb 23 00:00:00 2004
+ date_trunc_week |     week_trunc      
+-----------------+---------------------
+                 | 2004-02-23 00:00:00
 (1 row)
 
 -- Test casting within a BETWEEN qualifier
@@ -553,63 +553,63 @@
   FROM TIMESTAMP_TBL
   WHERE d1 BETWEEN timestamp without time zone '1902-01-01'
    AND timestamp without time zone '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 1724 days 18 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 13 hours 14 mins 2 secs
-    | @ 1168 days 12 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 2 hours 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 18 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |        diff         
+----+---------------------
+    | -9863 days
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:02
+    | 39 days 17:32:01.4
+    | 39 days 17:32:01.5
+    | 39 days 17:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 17:32:01
+    | 1724 days 18:19:20
+    | 1168 days 08:14:01
+    | 1168 days 13:14:02
+    | 1168 days 12:14:03
+    | 1168 days 03:14:04
+    | 1168 days 02:14:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 273 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 18:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
@@ -617,189 +617,189 @@
    date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
    date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | year | month | day | hour | minute | second 
-----+----------------------------+------+-------+-----+------+--------+--------
-    | Thu Jan 01 00:00:00 1970   | 1970 |     1 |   1 |    0 |      0 |      0
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:02 1997   | 1997 |     2 |  10 |   17 |     32 |      2
-    | Mon Feb 10 17:32:01.4 1997 | 1997 |     2 |  10 |   17 |     32 |    1.4
-    | Mon Feb 10 17:32:01.5 1997 | 1997 |     2 |  10 |   17 |     32 |    1.5
-    | Mon Feb 10 17:32:01.6 1997 | 1997 |     2 |  10 |   17 |     32 |    1.6
-    | Thu Jan 02 00:00:00 1997   | 1997 |     1 |   2 |    0 |      0 |      0
-    | Thu Jan 02 03:04:05 1997   | 1997 |     1 |   2 |    3 |      4 |      5
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 17:32:01 1997   | 1997 |     6 |  10 |   17 |     32 |      1
-    | Sat Sep 22 18:19:20 2001   | 2001 |     9 |  22 |   18 |     19 |     20
-    | Wed Mar 15 08:14:01 2000   | 2000 |     3 |  15 |    8 |     14 |      1
-    | Wed Mar 15 13:14:02 2000   | 2000 |     3 |  15 |   13 |     14 |      2
-    | Wed Mar 15 12:14:03 2000   | 2000 |     3 |  15 |   12 |     14 |      3
-    | Wed Mar 15 03:14:04 2000   | 2000 |     3 |  15 |    3 |     14 |      4
-    | Wed Mar 15 02:14:05 2000   | 2000 |     3 |  15 |    2 |     14 |      5
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:00 1997   | 1997 |     2 |  10 |   17 |     32 |      0
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 18:32:01 1997   | 1997 |     6 |  10 |   18 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Feb 11 17:32:01 1997   | 1997 |     2 |  11 |   17 |     32 |      1
-    | Wed Feb 12 17:32:01 1997   | 1997 |     2 |  12 |   17 |     32 |      1
-    | Thu Feb 13 17:32:01 1997   | 1997 |     2 |  13 |   17 |     32 |      1
-    | Fri Feb 14 17:32:01 1997   | 1997 |     2 |  14 |   17 |     32 |      1
-    | Sat Feb 15 17:32:01 1997   | 1997 |     2 |  15 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Wed Feb 28 17:32:01 1996   | 1996 |     2 |  28 |   17 |     32 |      1
-    | Thu Feb 29 17:32:01 1996   | 1996 |     2 |  29 |   17 |     32 |      1
-    | Fri Mar 01 17:32:01 1996   | 1996 |     3 |   1 |   17 |     32 |      1
-    | Mon Dec 30 17:32:01 1996   | 1996 |    12 |  30 |   17 |     32 |      1
-    | Tue Dec 31 17:32:01 1996   | 1996 |    12 |  31 |   17 |     32 |      1
-    | Wed Jan 01 17:32:01 1997   | 1997 |     1 |   1 |   17 |     32 |      1
-    | Fri Feb 28 17:32:01 1997   | 1997 |     2 |  28 |   17 |     32 |      1
-    | Sat Mar 01 17:32:01 1997   | 1997 |     3 |   1 |   17 |     32 |      1
-    | Tue Dec 30 17:32:01 1997   | 1997 |    12 |  30 |   17 |     32 |      1
-    | Wed Dec 31 17:32:01 1997   | 1997 |    12 |  31 |   17 |     32 |      1
-    | Fri Dec 31 17:32:01 1999   | 1999 |    12 |  31 |   17 |     32 |      1
-    | Sat Jan 01 17:32:01 2000   | 2000 |     1 |   1 |   17 |     32 |      1
-    | Sun Dec 31 17:32:01 2000   | 2000 |    12 |  31 |   17 |     32 |      1
-    | Mon Jan 01 17:32:01 2001   | 2001 |     1 |   1 |   17 |     32 |      1
+ 54 |       timestamp       | year | month | day | hour | minute | second 
+----+-----------------------+------+-------+-----+------+--------+--------
+    | 1970-01-01 00:00:00   | 1970 |     1 |   1 |    0 |      0 |      0
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:02   | 1997 |     2 |  10 |   17 |     32 |      2
+    | 1997-02-10 17:32:01.4 | 1997 |     2 |  10 |   17 |     32 |    1.4
+    | 1997-02-10 17:32:01.5 | 1997 |     2 |  10 |   17 |     32 |    1.5
+    | 1997-02-10 17:32:01.6 | 1997 |     2 |  10 |   17 |     32 |    1.6
+    | 1997-01-02 00:00:00   | 1997 |     1 |   2 |    0 |      0 |      0
+    | 1997-01-02 03:04:05   | 1997 |     1 |   2 |    3 |      4 |      5
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-06-10 17:32:01   | 1997 |     6 |  10 |   17 |     32 |      1
+    | 2001-09-22 18:19:20   | 2001 |     9 |  22 |   18 |     19 |     20
+    | 2000-03-15 08:14:01   | 2000 |     3 |  15 |    8 |     14 |      1
+    | 2000-03-15 13:14:02   | 2000 |     3 |  15 |   13 |     14 |      2
+    | 2000-03-15 12:14:03   | 2000 |     3 |  15 |   12 |     14 |      3
+    | 2000-03-15 03:14:04   | 2000 |     3 |  15 |    3 |     14 |      4
+    | 2000-03-15 02:14:05   | 2000 |     3 |  15 |    2 |     14 |      5
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:00   | 1997 |     2 |  10 |   17 |     32 |      0
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-10-02 17:32:01   | 1997 |    10 |   2 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-06-10 18:32:01   | 1997 |     6 |  10 |   18 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-11 17:32:01   | 1997 |     2 |  11 |   17 |     32 |      1
+    | 1997-02-12 17:32:01   | 1997 |     2 |  12 |   17 |     32 |      1
+    | 1997-02-13 17:32:01   | 1997 |     2 |  13 |   17 |     32 |      1
+    | 1997-02-14 17:32:01   | 1997 |     2 |  14 |   17 |     32 |      1
+    | 1997-02-15 17:32:01   | 1997 |     2 |  15 |   17 |     32 |      1
+    | 1997-02-16 17:32:01   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1997-02-16 17:32:01   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1996-02-28 17:32:01   | 1996 |     2 |  28 |   17 |     32 |      1
+    | 1996-02-29 17:32:01   | 1996 |     2 |  29 |   17 |     32 |      1
+    | 1996-03-01 17:32:01   | 1996 |     3 |   1 |   17 |     32 |      1
+    | 1996-12-30 17:32:01   | 1996 |    12 |  30 |   17 |     32 |      1
+    | 1996-12-31 17:32:01   | 1996 |    12 |  31 |   17 |     32 |      1
+    | 1997-01-01 17:32:01   | 1997 |     1 |   1 |   17 |     32 |      1
+    | 1997-02-28 17:32:01   | 1997 |     2 |  28 |   17 |     32 |      1
+    | 1997-03-01 17:32:01   | 1997 |     3 |   1 |   17 |     32 |      1
+    | 1997-12-30 17:32:01   | 1997 |    12 |  30 |   17 |     32 |      1
+    | 1997-12-31 17:32:01   | 1997 |    12 |  31 |   17 |     32 |      1
+    | 1999-12-31 17:32:01   | 1999 |    12 |  31 |   17 |     32 |      1
+    | 2000-01-01 17:32:01   | 2000 |     1 |   1 |   17 |     32 |      1
+    | 2000-12-31 17:32:01   | 2000 |    12 |  31 |   17 |     32 |      1
+    | 2001-01-01 17:32:01   | 2001 |     1 |   1 |   17 |     32 |      1
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
    date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
    date_part( 'usec', d1) AS usec
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | quarter | msec  |   usec   
-----+----------------------------+---------+-------+----------
-    | Thu Jan 01 00:00:00 1970   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:02 1997   |       1 |  2000 |  2000000
-    | Mon Feb 10 17:32:01.4 1997 |       1 |  1400 |  1400000
-    | Mon Feb 10 17:32:01.5 1997 |       1 |  1500 |  1500000
-    | Mon Feb 10 17:32:01.6 1997 |       1 |  1600 |  1600000
-    | Thu Jan 02 00:00:00 1997   |       1 |     0 |        0
-    | Thu Jan 02 03:04:05 1997   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Jun 10 17:32:01 1997   |       2 |  1000 |  1000000
-    | Sat Sep 22 18:19:20 2001   |       3 | 20000 | 20000000
-    | Wed Mar 15 08:14:01 2000   |       1 |  1000 |  1000000
-    | Wed Mar 15 13:14:02 2000   |       1 |  2000 |  2000000
-    | Wed Mar 15 12:14:03 2000   |       1 |  3000 |  3000000
-    | Wed Mar 15 03:14:04 2000   |       1 |  4000 |  4000000
-    | Wed Mar 15 02:14:05 2000   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:00 1997   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Jun 10 18:32:01 1997   |       2 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Feb 11 17:32:01 1997   |       1 |  1000 |  1000000
-    | Wed Feb 12 17:32:01 1997   |       1 |  1000 |  1000000
-    | Thu Feb 13 17:32:01 1997   |       1 |  1000 |  1000000
-    | Fri Feb 14 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sat Feb 15 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997   |       1 |  1000 |  1000000
-    | Wed Feb 28 17:32:01 1996   |       1 |  1000 |  1000000
-    | Thu Feb 29 17:32:01 1996   |       1 |  1000 |  1000000
-    | Fri Mar 01 17:32:01 1996   |       1 |  1000 |  1000000
-    | Mon Dec 30 17:32:01 1996   |       4 |  1000 |  1000000
-    | Tue Dec 31 17:32:01 1996   |       4 |  1000 |  1000000
-    | Wed Jan 01 17:32:01 1997   |       1 |  1000 |  1000000
-    | Fri Feb 28 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sat Mar 01 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Dec 30 17:32:01 1997   |       4 |  1000 |  1000000
-    | Wed Dec 31 17:32:01 1997   |       4 |  1000 |  1000000
-    | Fri Dec 31 17:32:01 1999   |       4 |  1000 |  1000000
-    | Sat Jan 01 17:32:01 2000   |       1 |  1000 |  1000000
-    | Sun Dec 31 17:32:01 2000   |       4 |  1000 |  1000000
-    | Mon Jan 01 17:32:01 2001   |       1 |  1000 |  1000000
+ 54 |       timestamp       | quarter | msec  |   usec   
+----+-----------------------+---------+-------+----------
+    | 1970-01-01 00:00:00   |       1 |     0 |        0
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:02   |       1 |  2000 |  2000000
+    | 1997-02-10 17:32:01.4 |       1 |  1400 |  1400000
+    | 1997-02-10 17:32:01.5 |       1 |  1500 |  1500000
+    | 1997-02-10 17:32:01.6 |       1 |  1600 |  1600000
+    | 1997-01-02 00:00:00   |       1 |     0 |        0
+    | 1997-01-02 03:04:05   |       1 |  5000 |  5000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-06-10 17:32:01   |       2 |  1000 |  1000000
+    | 2001-09-22 18:19:20   |       3 | 20000 | 20000000
+    | 2000-03-15 08:14:01   |       1 |  1000 |  1000000
+    | 2000-03-15 13:14:02   |       1 |  2000 |  2000000
+    | 2000-03-15 12:14:03   |       1 |  3000 |  3000000
+    | 2000-03-15 03:14:04   |       1 |  4000 |  4000000
+    | 2000-03-15 02:14:05   |       1 |  5000 |  5000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:00   |       1 |     0 |        0
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-10-02 17:32:01   |       4 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-06-10 18:32:01   |       2 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-11 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-12 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-13 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-14 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-15 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01   |       1 |  1000 |  1000000
+    | 1996-02-28 17:32:01   |       1 |  1000 |  1000000
+    | 1996-02-29 17:32:01   |       1 |  1000 |  1000000
+    | 1996-03-01 17:32:01   |       1 |  1000 |  1000000
+    | 1996-12-30 17:32:01   |       4 |  1000 |  1000000
+    | 1996-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 1997-01-01 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-28 17:32:01   |       1 |  1000 |  1000000
+    | 1997-03-01 17:32:01   |       1 |  1000 |  1000000
+    | 1997-12-30 17:32:01   |       4 |  1000 |  1000000
+    | 1997-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 1999-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 2000-01-01 17:32:01   |       1 |  1000 |  1000000
+    | 2000-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 2001-01-01 17:32:01   |       1 |  1000 |  1000000
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
    date_part( 'isoyear', d1) AS isoyear, date_part( 'week', d1) AS week,
    date_part( 'dow', d1) AS dow
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | isoyear | week | dow 
-----+----------------------------+---------+------+-----
-    | Thu Jan 01 00:00:00 1970   |    1970 |    1 |   4
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:02 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.4 1997 |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.5 1997 |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.6 1997 |    1997 |    7 |   1
-    | Thu Jan 02 00:00:00 1997   |    1997 |    1 |   4
-    | Thu Jan 02 03:04:05 1997   |    1997 |    1 |   4
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Jun 10 17:32:01 1997   |    1997 |   24 |   2
-    | Sat Sep 22 18:19:20 2001   |    2001 |   38 |   6
-    | Wed Mar 15 08:14:01 2000   |    2000 |   11 |   3
-    | Wed Mar 15 13:14:02 2000   |    2000 |   11 |   3
-    | Wed Mar 15 12:14:03 2000   |    2000 |   11 |   3
-    | Wed Mar 15 03:14:04 2000   |    2000 |   11 |   3
-    | Wed Mar 15 02:14:05 2000   |    2000 |   11 |   3
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:00 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Jun 10 18:32:01 1997   |    1997 |   24 |   2
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Feb 11 17:32:01 1997   |    1997 |    7 |   2
-    | Wed Feb 12 17:32:01 1997   |    1997 |    7 |   3
-    | Thu Feb 13 17:32:01 1997   |    1997 |    7 |   4
-    | Fri Feb 14 17:32:01 1997   |    1997 |    7 |   5
-    | Sat Feb 15 17:32:01 1997   |    1997 |    7 |   6
-    | Sun Feb 16 17:32:01 1997   |    1997 |    7 |   0
-    | Sun Feb 16 17:32:01 1997   |    1997 |    7 |   0
-    | Wed Feb 28 17:32:01 1996   |    1996 |    9 |   3
-    | Thu Feb 29 17:32:01 1996   |    1996 |    9 |   4
-    | Fri Mar 01 17:32:01 1996   |    1996 |    9 |   5
-    | Mon Dec 30 17:32:01 1996   |    1997 |    1 |   1
-    | Tue Dec 31 17:32:01 1996   |    1997 |    1 |   2
-    | Wed Jan 01 17:32:01 1997   |    1997 |    1 |   3
-    | Fri Feb 28 17:32:01 1997   |    1997 |    9 |   5
-    | Sat Mar 01 17:32:01 1997   |    1997 |    9 |   6
-    | Tue Dec 30 17:32:01 1997   |    1998 |    1 |   2
-    | Wed Dec 31 17:32:01 1997   |    1998 |    1 |   3
-    | Fri Dec 31 17:32:01 1999   |    1999 |   52 |   5
-    | Sat Jan 01 17:32:01 2000   |    1999 |   52 |   6
-    | Sun Dec 31 17:32:01 2000   |    2000 |   52 |   0
-    | Mon Jan 01 17:32:01 2001   |    2001 |    1 |   1
+ 54 |       timestamp       | isoyear | week | dow 
+----+-----------------------+---------+------+-----
+    | 1970-01-01 00:00:00   |    1970 |    1 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:02   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.4 |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.5 |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.6 |    1997 |    7 |   1
+    | 1997-01-02 00:00:00   |    1997 |    1 |   4
+    | 1997-01-02 03:04:05   |    1997 |    1 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-06-10 17:32:01   |    1997 |   24 |   2
+    | 2001-09-22 18:19:20   |    2001 |   38 |   6
+    | 2000-03-15 08:14:01   |    2000 |   11 |   3
+    | 2000-03-15 13:14:02   |    2000 |   11 |   3
+    | 2000-03-15 12:14:03   |    2000 |   11 |   3
+    | 2000-03-15 03:14:04   |    2000 |   11 |   3
+    | 2000-03-15 02:14:05   |    2000 |   11 |   3
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:00   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-10-02 17:32:01   |    1997 |   40 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-06-10 18:32:01   |    1997 |   24 |   2
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-11 17:32:01   |    1997 |    7 |   2
+    | 1997-02-12 17:32:01   |    1997 |    7 |   3
+    | 1997-02-13 17:32:01   |    1997 |    7 |   4
+    | 1997-02-14 17:32:01   |    1997 |    7 |   5
+    | 1997-02-15 17:32:01   |    1997 |    7 |   6
+    | 1997-02-16 17:32:01   |    1997 |    7 |   0
+    | 1997-02-16 17:32:01   |    1997 |    7 |   0
+    | 1996-02-28 17:32:01   |    1996 |    9 |   3
+    | 1996-02-29 17:32:01   |    1996 |    9 |   4
+    | 1996-03-01 17:32:01   |    1996 |    9 |   5
+    | 1996-12-30 17:32:01   |    1997 |    1 |   1
+    | 1996-12-31 17:32:01   |    1997 |    1 |   2
+    | 1997-01-01 17:32:01   |    1997 |    1 |   3
+    | 1997-02-28 17:32:01   |    1997 |    9 |   5
+    | 1997-03-01 17:32:01   |    1997 |    9 |   6
+    | 1997-12-30 17:32:01   |    1998 |    1 |   2
+    | 1997-12-31 17:32:01   |    1998 |    1 |   3
+    | 1999-12-31 17:32:01   |    1999 |   52 |   5
+    | 2000-01-01 17:32:01   |    1999 |   52 |   6
+    | 2000-12-31 17:32:01   |    2000 |   52 |   0
+    | 2001-01-01 17:32:01   |    2001 |    1 |   1
 (55 rows)
 
 -- TO_CHAR()
@@ -835,7 +835,7 @@
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
-           | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
+           | THURSDAY  Thursday  thursday  THU Thu thu OCTOBER   October   october   X    OCT Oct oct
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
@@ -906,7 +906,7 @@
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
-           | MONDAY Monday monday FEBRUARY February february II
+           | THURSDAY Thursday thursday OCTOBER October october X
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
@@ -977,7 +977,7 @@
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 02 5 2450724
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
@@ -1048,7 +1048,7 @@
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 2 5 2450724
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
@@ -1332,7 +1332,7 @@
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
-           | 1997TH 1997th 2450490th
+           | 1997TH 1997th 2450724th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
@@ -1474,7 +1474,7 @@
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
-            | 1997 997 97 7 07 043 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
@@ -1545,7 +1545,7 @@
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
-            | 1997 997 97 7 7 43 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
@@ -1586,8 +1586,8 @@
 
 -- timestamp numeric fields constructor
 SELECT make_timestamp(2014,12,28,6,30,45.887);
-        make_timestamp        
-------------------------------
- Sun Dec 28 06:30:45.887 2014
+     make_timestamp      
+-------------------------
+ 2014-12-28 06:30:45.887
 (1 row)
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamptz.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamptz.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamptz.out	2019-08-12 14:55:05.462233339 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamptz.out	2019-09-05 16:27:39.651634729 -0500
@@ -33,7 +33,7 @@
 SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'tomorrow';
  one 
 -----
-   1
+   2
 (1 row)
 
 SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'yesterday';
@@ -118,16 +118,16 @@
 -- timestamps at different timezones
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970210 173201 America/New_York');
 SELECT '19970210 173201' AT TIME ZONE 'America/New_York';
-         timezone         
---------------------------
- Mon Feb 10 20:32:01 1997
+      timezone       
+---------------------
+ 1997-02-10 17:32:01
 (1 row)
 
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970710 173201 America/New_York');
 SELECT '19970710 173201' AT TIME ZONE 'America/New_York';
-         timezone         
---------------------------
- Thu Jul 10 20:32:01 1997
+      timezone       
+---------------------
+ 1997-07-10 18:32:01
 (1 row)
 
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970710 173201 America/Does_not_exist');
@@ -138,27 +138,27 @@
 ERROR:  time zone "America/Does_not_exist" not recognized
 -- Daylight saving time for timestamps beyond 32-bit time_t range.
 SELECT '20500710 173201 Europe/Helsinki'::timestamptz; -- DST
-         timestamptz          
-------------------------------
- Sun Jul 10 07:32:01 2050 PDT
+      timestamptz       
+------------------------
+ 2050-07-10 09:32:01-05
 (1 row)
 
 SELECT '20500110 173201 Europe/Helsinki'::timestamptz; -- non-DST
-         timestamptz          
-------------------------------
- Mon Jan 10 07:32:01 2050 PST
+      timestamptz       
+------------------------
+ 2050-01-10 10:32:01-05
 (1 row)
 
 SELECT '205000-07-10 17:32:01 Europe/Helsinki'::timestamptz; -- DST
-          timestamptz           
---------------------------------
- Thu Jul 10 07:32:01 205000 PDT
+       timestamptz        
+--------------------------
+ 205000-07-10 09:32:01-05
 (1 row)
 
 SELECT '205000-01-10 17:32:01 Europe/Helsinki'::timestamptz; -- non-DST
-          timestamptz           
---------------------------------
- Fri Jan 10 07:32:01 205000 PST
+       timestamptz        
+--------------------------
+ 205000-01-10 10:32:01-05
 (1 row)
 
 -- Check date conversion and date arithmetic
@@ -209,33 +209,33 @@
 -- Alternative field order that we've historically supported (sort of)
 -- with regular and POSIXy timezone specs
 SELECT 'Wed Jul 11 10:51:14 America/New_York 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 07:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 09:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 GMT-4 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Tue Jul 10 23:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 01:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 GMT+4 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 07:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 09:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 PST-03:00 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 00:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 02:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 PST+03:00 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 06:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 08:51:14-05
 (1 row)
 
 SELECT '' AS "64", d1 FROM TIMESTAMPTZ_TBL;
@@ -243,89 +243,89 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 00:00:00-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (66 rows)
 
 -- Check behavior at the lower boundary of the timestamp range
 SELECT '4714-11-24 00:00:00+00 BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT '4714-11-23 16:00:00-08 BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT 'Sun Nov 23 16:00:00 4714 PST BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT '4714-11-23 23:59:59+00 BC'::timestamptz;  -- out of range
@@ -336,58 +336,58 @@
 -- Demonstrate functions and operators
 SELECT '' AS "48", d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 > timestamp with time zone '1997-01-02';
- 48 |               d1               
-----+--------------------------------
+ 48 |            d1            
+----+--------------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (50 rows)
 
 SELECT '' AS "15", d1 FROM TIMESTAMPTZ_TBL
@@ -395,27 +395,27 @@
  15 |               d1                
 ----+---------------------------------
     | -infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
+    | 1969-12-31 19:00:00-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
 (15 rows)
 
 SELECT '' AS one, d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 = timestamp with time zone '1997-01-02';
- one |              d1              
------+------------------------------
-     | Thu Jan 02 00:00:00 1997 PST
+ one |           d1           
+-----+------------------------
+     | 1997-01-02 00:00:00-05
 (1 row)
 
 SELECT '' AS "63", d1 FROM TIMESTAMPTZ_TBL
@@ -424,69 +424,69 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (65 rows)
 
 SELECT '' AS "16", d1 FROM TIMESTAMPTZ_TBL
@@ -494,228 +494,228 @@
  16 |               d1                
 ----+---------------------------------
     | -infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-01-02 00:00:00-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
 (16 rows)
 
 SELECT '' AS "49", d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 >= timestamp with time zone '1997-01-02';
- 49 |               d1               
-----+--------------------------------
+ 49 |            d1            
+----+--------------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 00:00:00-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (51 rows)
 
 SELECT '' AS "54", d1 - timestamp with time zone '1997-01-02' AS diff
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days 8 hours ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 16 hours 32 mins 1 sec
-    | @ 1724 days 17 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 4 hours 14 mins 2 secs
-    | @ 1168 days 2 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 1 hour 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 14 hours 32 mins 1 sec
-    | @ 189 days 13 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |         diff         
+----+----------------------
+    | -9863 days -05:00:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:02
+    | 39 days 20:32:01.4
+    | 39 days 20:32:01.5
+    | 39 days 20:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 159 days 19:32:01
+    | 1724 days 18:19:20
+    | 1168 days 11:14:01
+    | 1168 days 07:14:02
+    | 1168 days 05:14:03
+    | 1168 days 06:14:04
+    | 1168 days 04:14:05
+    | 39 days 20:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 273 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 17:32:01
+    | 189 days 16:32:01
+    | 159 days 20:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (56 rows)
 
 SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp with time zone '2004-02-29 15:44:17.71393' ) AS week_trunc;
- date_trunc_week |          week_trunc          
------------------+------------------------------
-                 | Mon Feb 23 00:00:00 2004 PST
+ date_trunc_week |       week_trunc       
+-----------------+------------------------
+                 | 2004-02-23 00:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'Australia/Sydney') as sydney_trunc;  -- zone name
- date_trunc_at_tz |         sydney_trunc         
-------------------+------------------------------
-                  | Fri Feb 16 05:00:00 2001 PST
+ date_trunc_at_tz |      sydney_trunc      
+------------------+------------------------
+                  | 2001-02-16 08:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'GMT') as gmt_trunc;  -- fixed-offset abbreviation
- date_trunc_at_tz |          gmt_trunc           
-------------------+------------------------------
-                  | Thu Feb 15 16:00:00 2001 PST
+ date_trunc_at_tz |       gmt_trunc        
+------------------+------------------------
+                  | 2001-02-15 19:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'VET') as vet_trunc;  -- variable-offset abbreviation
- date_trunc_at_tz |          vet_trunc           
-------------------+------------------------------
-                  | Thu Feb 15 20:00:00 2001 PST
+ date_trunc_at_tz |       vet_trunc        
+------------------+------------------------
+                  | 2001-02-15 23:00:00-05
 (1 row)
 
 -- Test casting within a BETWEEN qualifier
 SELECT '' AS "54", d1 - timestamp with time zone '1997-01-02' AS diff
   FROM TIMESTAMPTZ_TBL
   WHERE d1 BETWEEN timestamp with time zone '1902-01-01' AND timestamp with time zone '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days 8 hours ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 16 hours 32 mins 1 sec
-    | @ 1724 days 17 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 4 hours 14 mins 2 secs
-    | @ 1168 days 2 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 1 hour 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 14 hours 32 mins 1 sec
-    | @ 189 days 13 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |         diff         
+----+----------------------
+    | -9863 days -05:00:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:02
+    | 39 days 20:32:01.4
+    | 39 days 20:32:01.5
+    | 39 days 20:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 159 days 19:32:01
+    | 1724 days 18:19:20
+    | 1168 days 11:14:01
+    | 1168 days 07:14:02
+    | 1168 days 05:14:03
+    | 1168 days 06:14:04
+    | 1168 days 04:14:05
+    | 39 days 20:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 273 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 17:32:01
+    | 189 days 16:32:01
+    | 159 days 20:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
@@ -723,192 +723,192 @@
    date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
    date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | year | month | day | hour | minute | second 
-----+--------------------------------+------+-------+-----+------+--------+--------
-    | Wed Dec 31 16:00:00 1969 PST   | 1969 |    12 |  31 |   16 |      0 |      0
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:02 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      2
-    | Mon Feb 10 17:32:01.4 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.4
-    | Mon Feb 10 17:32:01.5 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.5
-    | Mon Feb 10 17:32:01.6 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.6
-    | Thu Jan 02 00:00:00 1997 PST   | 1997 |     1 |   2 |    0 |      0 |      0
-    | Thu Jan 02 03:04:05 1997 PST   | 1997 |     1 |   2 |    3 |      4 |      5
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 17:32:01 1997 PDT   | 1997 |     6 |  10 |   17 |     32 |      1
-    | Sat Sep 22 18:19:20 2001 PDT   | 2001 |     9 |  22 |   18 |     19 |     20
-    | Wed Mar 15 08:14:01 2000 PST   | 2000 |     3 |  15 |    8 |     14 |      1
-    | Wed Mar 15 04:14:02 2000 PST   | 2000 |     3 |  15 |    4 |     14 |      2
-    | Wed Mar 15 02:14:03 2000 PST   | 2000 |     3 |  15 |    2 |     14 |      3
-    | Wed Mar 15 03:14:04 2000 PST   | 2000 |     3 |  15 |    3 |     14 |      4
-    | Wed Mar 15 01:14:05 2000 PST   | 2000 |     3 |  15 |    1 |     14 |      5
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:00 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      0
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 14:32:01 1997 PST   | 1997 |     2 |  10 |   14 |     32 |      1
-    | Thu Jul 10 14:32:01 1997 PDT   | 1997 |     7 |  10 |   14 |     32 |      1
-    | Tue Jun 10 18:32:01 1997 PDT   | 1997 |     6 |  10 |   18 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Feb 11 17:32:01 1997 PST   | 1997 |     2 |  11 |   17 |     32 |      1
-    | Wed Feb 12 17:32:01 1997 PST   | 1997 |     2 |  12 |   17 |     32 |      1
-    | Thu Feb 13 17:32:01 1997 PST   | 1997 |     2 |  13 |   17 |     32 |      1
-    | Fri Feb 14 17:32:01 1997 PST   | 1997 |     2 |  14 |   17 |     32 |      1
-    | Sat Feb 15 17:32:01 1997 PST   | 1997 |     2 |  15 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997 PST   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997 PST   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Wed Feb 28 17:32:01 1996 PST   | 1996 |     2 |  28 |   17 |     32 |      1
-    | Thu Feb 29 17:32:01 1996 PST   | 1996 |     2 |  29 |   17 |     32 |      1
-    | Fri Mar 01 17:32:01 1996 PST   | 1996 |     3 |   1 |   17 |     32 |      1
-    | Mon Dec 30 17:32:01 1996 PST   | 1996 |    12 |  30 |   17 |     32 |      1
-    | Tue Dec 31 17:32:01 1996 PST   | 1996 |    12 |  31 |   17 |     32 |      1
-    | Wed Jan 01 17:32:01 1997 PST   | 1997 |     1 |   1 |   17 |     32 |      1
-    | Fri Feb 28 17:32:01 1997 PST   | 1997 |     2 |  28 |   17 |     32 |      1
-    | Sat Mar 01 17:32:01 1997 PST   | 1997 |     3 |   1 |   17 |     32 |      1
-    | Tue Dec 30 17:32:01 1997 PST   | 1997 |    12 |  30 |   17 |     32 |      1
-    | Wed Dec 31 17:32:01 1997 PST   | 1997 |    12 |  31 |   17 |     32 |      1
-    | Fri Dec 31 17:32:01 1999 PST   | 1999 |    12 |  31 |   17 |     32 |      1
-    | Sat Jan 01 17:32:01 2000 PST   | 2000 |     1 |   1 |   17 |     32 |      1
-    | Sun Dec 31 17:32:01 2000 PST   | 2000 |    12 |  31 |   17 |     32 |      1
-    | Mon Jan 01 17:32:01 2001 PST   | 2001 |     1 |   1 |   17 |     32 |      1
+ 54 |       timestamptz        | year | month | day | hour | minute | second 
+----+--------------------------+------+-------+-----+------+--------+--------
+    | 1969-12-31 19:00:00-05   | 1969 |    12 |  31 |   19 |      0 |      0
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:02-05   | 1997 |     2 |  10 |   20 |     32 |      2
+    | 1997-02-10 20:32:01.4-05 | 1997 |     2 |  10 |   20 |     32 |    1.4
+    | 1997-02-10 20:32:01.5-05 | 1997 |     2 |  10 |   20 |     32 |    1.5
+    | 1997-02-10 20:32:01.6-05 | 1997 |     2 |  10 |   20 |     32 |    1.6
+    | 1997-01-02 00:00:00-05   | 1997 |     1 |   2 |    0 |      0 |      0
+    | 1997-01-02 03:04:05-05   | 1997 |     1 |   2 |    3 |      4 |      5
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-06-10 19:32:01-05   | 1997 |     6 |  10 |   19 |     32 |      1
+    | 2001-09-22 18:19:20-05   | 2001 |     9 |  22 |   18 |     19 |     20
+    | 2000-03-15 11:14:01-05   | 2000 |     3 |  15 |   11 |     14 |      1
+    | 2000-03-15 07:14:02-05   | 2000 |     3 |  15 |    7 |     14 |      2
+    | 2000-03-15 05:14:03-05   | 2000 |     3 |  15 |    5 |     14 |      3
+    | 2000-03-15 06:14:04-05   | 2000 |     3 |  15 |    6 |     14 |      4
+    | 2000-03-15 04:14:05-05   | 2000 |     3 |  15 |    4 |     14 |      5
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:00-05   | 1997 |     2 |  10 |   17 |     32 |      0
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-10-02 20:32:01-05   | 1997 |    10 |   2 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-07-10 16:32:01-05   | 1997 |     7 |  10 |   16 |     32 |      1
+    | 1997-06-10 20:32:01-05   | 1997 |     6 |  10 |   20 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-11 17:32:01-05   | 1997 |     2 |  11 |   17 |     32 |      1
+    | 1997-02-12 17:32:01-05   | 1997 |     2 |  12 |   17 |     32 |      1
+    | 1997-02-13 17:32:01-05   | 1997 |     2 |  13 |   17 |     32 |      1
+    | 1997-02-14 17:32:01-05   | 1997 |     2 |  14 |   17 |     32 |      1
+    | 1997-02-15 17:32:01-05   | 1997 |     2 |  15 |   17 |     32 |      1
+    | 1997-02-16 17:32:01-05   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1997-02-16 17:32:01-05   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1996-02-28 17:32:01-05   | 1996 |     2 |  28 |   17 |     32 |      1
+    | 1996-02-29 17:32:01-05   | 1996 |     2 |  29 |   17 |     32 |      1
+    | 1996-03-01 17:32:01-05   | 1996 |     3 |   1 |   17 |     32 |      1
+    | 1996-12-30 17:32:01-05   | 1996 |    12 |  30 |   17 |     32 |      1
+    | 1996-12-31 17:32:01-05   | 1996 |    12 |  31 |   17 |     32 |      1
+    | 1997-01-01 17:32:01-05   | 1997 |     1 |   1 |   17 |     32 |      1
+    | 1997-02-28 17:32:01-05   | 1997 |     2 |  28 |   17 |     32 |      1
+    | 1997-03-01 17:32:01-05   | 1997 |     3 |   1 |   17 |     32 |      1
+    | 1997-12-30 17:32:01-05   | 1997 |    12 |  30 |   17 |     32 |      1
+    | 1997-12-31 17:32:01-05   | 1997 |    12 |  31 |   17 |     32 |      1
+    | 1999-12-31 17:32:01-05   | 1999 |    12 |  31 |   17 |     32 |      1
+    | 2000-01-01 17:32:01-05   | 2000 |     1 |   1 |   17 |     32 |      1
+    | 2000-12-31 17:32:01-05   | 2000 |    12 |  31 |   17 |     32 |      1
+    | 2001-01-01 17:32:01-05   | 2001 |     1 |   1 |   17 |     32 |      1
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
    date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
    date_part( 'usec', d1) AS usec
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | quarter | msec  |   usec   
-----+--------------------------------+---------+-------+----------
-    | Wed Dec 31 16:00:00 1969 PST   |       4 |     0 |        0
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:02 1997 PST   |       1 |  2000 |  2000000
-    | Mon Feb 10 17:32:01.4 1997 PST |       1 |  1400 |  1400000
-    | Mon Feb 10 17:32:01.5 1997 PST |       1 |  1500 |  1500000
-    | Mon Feb 10 17:32:01.6 1997 PST |       1 |  1600 |  1600000
-    | Thu Jan 02 00:00:00 1997 PST   |       1 |     0 |        0
-    | Thu Jan 02 03:04:05 1997 PST   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Jun 10 17:32:01 1997 PDT   |       2 |  1000 |  1000000
-    | Sat Sep 22 18:19:20 2001 PDT   |       3 | 20000 | 20000000
-    | Wed Mar 15 08:14:01 2000 PST   |       1 |  1000 |  1000000
-    | Wed Mar 15 04:14:02 2000 PST   |       1 |  2000 |  2000000
-    | Wed Mar 15 02:14:03 2000 PST   |       1 |  3000 |  3000000
-    | Wed Mar 15 03:14:04 2000 PST   |       1 |  4000 |  4000000
-    | Wed Mar 15 01:14:05 2000 PST   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:00 1997 PST   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 14:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Thu Jul 10 14:32:01 1997 PDT   |       3 |  1000 |  1000000
-    | Tue Jun 10 18:32:01 1997 PDT   |       2 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Feb 11 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Wed Feb 12 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Thu Feb 13 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Fri Feb 14 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sat Feb 15 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Wed Feb 28 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Thu Feb 29 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Fri Mar 01 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Mon Dec 30 17:32:01 1996 PST   |       4 |  1000 |  1000000
-    | Tue Dec 31 17:32:01 1996 PST   |       4 |  1000 |  1000000
-    | Wed Jan 01 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Fri Feb 28 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sat Mar 01 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Dec 30 17:32:01 1997 PST   |       4 |  1000 |  1000000
-    | Wed Dec 31 17:32:01 1997 PST   |       4 |  1000 |  1000000
-    | Fri Dec 31 17:32:01 1999 PST   |       4 |  1000 |  1000000
-    | Sat Jan 01 17:32:01 2000 PST   |       1 |  1000 |  1000000
-    | Sun Dec 31 17:32:01 2000 PST   |       4 |  1000 |  1000000
-    | Mon Jan 01 17:32:01 2001 PST   |       1 |  1000 |  1000000
+ 54 |       timestamptz        | quarter | msec  |   usec   
+----+--------------------------+---------+-------+----------
+    | 1969-12-31 19:00:00-05   |       4 |     0 |        0
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:02-05   |       1 |  2000 |  2000000
+    | 1997-02-10 20:32:01.4-05 |       1 |  1400 |  1400000
+    | 1997-02-10 20:32:01.5-05 |       1 |  1500 |  1500000
+    | 1997-02-10 20:32:01.6-05 |       1 |  1600 |  1600000
+    | 1997-01-02 00:00:00-05   |       1 |     0 |        0
+    | 1997-01-02 03:04:05-05   |       1 |  5000 |  5000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-06-10 19:32:01-05   |       2 |  1000 |  1000000
+    | 2001-09-22 18:19:20-05   |       3 | 20000 | 20000000
+    | 2000-03-15 11:14:01-05   |       1 |  1000 |  1000000
+    | 2000-03-15 07:14:02-05   |       1 |  2000 |  2000000
+    | 2000-03-15 05:14:03-05   |       1 |  3000 |  3000000
+    | 2000-03-15 06:14:04-05   |       1 |  4000 |  4000000
+    | 2000-03-15 04:14:05-05   |       1 |  5000 |  5000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:00-05   |       1 |     0 |        0
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-10-02 20:32:01-05   |       4 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-07-10 16:32:01-05   |       3 |  1000 |  1000000
+    | 1997-06-10 20:32:01-05   |       2 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-11 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-12 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-13 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-14 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-15 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-02-28 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-02-29 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-03-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-12-30 17:32:01-05   |       4 |  1000 |  1000000
+    | 1996-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 1997-01-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-28 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-03-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-12-30 17:32:01-05   |       4 |  1000 |  1000000
+    | 1997-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 1999-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 2000-01-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 2000-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 2001-01-01 17:32:01-05   |       1 |  1000 |  1000000
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
    date_part( 'isoyear', d1) AS isoyear, date_part( 'week', d1) AS week,
    date_part( 'dow', d1) AS dow
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | isoyear | week | dow 
-----+--------------------------------+---------+------+-----
-    | Wed Dec 31 16:00:00 1969 PST   |    1970 |    1 |   3
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:02 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.4 1997 PST |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.5 1997 PST |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.6 1997 PST |    1997 |    7 |   1
-    | Thu Jan 02 00:00:00 1997 PST   |    1997 |    1 |   4
-    | Thu Jan 02 03:04:05 1997 PST   |    1997 |    1 |   4
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Tue Jun 10 17:32:01 1997 PDT   |    1997 |   24 |   2
-    | Sat Sep 22 18:19:20 2001 PDT   |    2001 |   38 |   6
-    | Wed Mar 15 08:14:01 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 04:14:02 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 02:14:03 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 03:14:04 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 01:14:05 2000 PST   |    2000 |   11 |   3
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:00 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 14:32:01 1997 PST   |    1997 |    7 |   1
-    | Thu Jul 10 14:32:01 1997 PDT   |    1997 |   28 |   4
-    | Tue Jun 10 18:32:01 1997 PDT   |    1997 |   24 |   2
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Tue Feb 11 17:32:01 1997 PST   |    1997 |    7 |   2
-    | Wed Feb 12 17:32:01 1997 PST   |    1997 |    7 |   3
-    | Thu Feb 13 17:32:01 1997 PST   |    1997 |    7 |   4
-    | Fri Feb 14 17:32:01 1997 PST   |    1997 |    7 |   5
-    | Sat Feb 15 17:32:01 1997 PST   |    1997 |    7 |   6
-    | Sun Feb 16 17:32:01 1997 PST   |    1997 |    7 |   0
-    | Sun Feb 16 17:32:01 1997 PST   |    1997 |    7 |   0
-    | Wed Feb 28 17:32:01 1996 PST   |    1996 |    9 |   3
-    | Thu Feb 29 17:32:01 1996 PST   |    1996 |    9 |   4
-    | Fri Mar 01 17:32:01 1996 PST   |    1996 |    9 |   5
-    | Mon Dec 30 17:32:01 1996 PST   |    1997 |    1 |   1
-    | Tue Dec 31 17:32:01 1996 PST   |    1997 |    1 |   2
-    | Wed Jan 01 17:32:01 1997 PST   |    1997 |    1 |   3
-    | Fri Feb 28 17:32:01 1997 PST   |    1997 |    9 |   5
-    | Sat Mar 01 17:32:01 1997 PST   |    1997 |    9 |   6
-    | Tue Dec 30 17:32:01 1997 PST   |    1998 |    1 |   2
-    | Wed Dec 31 17:32:01 1997 PST   |    1998 |    1 |   3
-    | Fri Dec 31 17:32:01 1999 PST   |    1999 |   52 |   5
-    | Sat Jan 01 17:32:01 2000 PST   |    1999 |   52 |   6
-    | Sun Dec 31 17:32:01 2000 PST   |    2000 |   52 |   0
-    | Mon Jan 01 17:32:01 2001 PST   |    2001 |    1 |   1
+ 54 |       timestamptz        | isoyear | week | dow 
+----+--------------------------+---------+------+-----
+    | 1969-12-31 19:00:00-05   |    1970 |    1 |   3
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:02-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.4-05 |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.5-05 |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.6-05 |    1997 |    7 |   1
+    | 1997-01-02 00:00:00-05   |    1997 |    1 |   4
+    | 1997-01-02 03:04:05-05   |    1997 |    1 |   4
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-06-10 19:32:01-05   |    1997 |   24 |   2
+    | 2001-09-22 18:19:20-05   |    2001 |   38 |   6
+    | 2000-03-15 11:14:01-05   |    2000 |   11 |   3
+    | 2000-03-15 07:14:02-05   |    2000 |   11 |   3
+    | 2000-03-15 05:14:03-05   |    2000 |   11 |   3
+    | 2000-03-15 06:14:04-05   |    2000 |   11 |   3
+    | 2000-03-15 04:14:05-05   |    2000 |   11 |   3
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:00-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-10-02 20:32:01-05   |    1997 |   40 |   4
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-07-10 16:32:01-05   |    1997 |   28 |   4
+    | 1997-06-10 20:32:01-05   |    1997 |   24 |   2
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-02-11 17:32:01-05   |    1997 |    7 |   2
+    | 1997-02-12 17:32:01-05   |    1997 |    7 |   3
+    | 1997-02-13 17:32:01-05   |    1997 |    7 |   4
+    | 1997-02-14 17:32:01-05   |    1997 |    7 |   5
+    | 1997-02-15 17:32:01-05   |    1997 |    7 |   6
+    | 1997-02-16 17:32:01-05   |    1997 |    7 |   0
+    | 1997-02-16 17:32:01-05   |    1997 |    7 |   0
+    | 1996-02-28 17:32:01-05   |    1996 |    9 |   3
+    | 1996-02-29 17:32:01-05   |    1996 |    9 |   4
+    | 1996-03-01 17:32:01-05   |    1996 |    9 |   5
+    | 1996-12-30 17:32:01-05   |    1997 |    1 |   1
+    | 1996-12-31 17:32:01-05   |    1997 |    1 |   2
+    | 1997-01-01 17:32:01-05   |    1997 |    1 |   3
+    | 1997-02-28 17:32:01-05   |    1997 |    9 |   5
+    | 1997-03-01 17:32:01-05   |    1997 |    9 |   6
+    | 1997-12-30 17:32:01-05   |    1998 |    1 |   2
+    | 1997-12-31 17:32:01-05   |    1998 |    1 |   3
+    | 1999-12-31 17:32:01-05   |    1999 |   52 |   5
+    | 2000-01-01 17:32:01-05   |    1999 |   52 |   6
+    | 2000-12-31 17:32:01-05   |    2000 |   52 |   0
+    | 2001-01-01 17:32:01-05   |    2001 |    1 |   1
 (56 rows)
 
 -- TO_CHAR()
@@ -944,7 +944,7 @@
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
-           | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
+           | THURSDAY  Thursday  thursday  THU Thu thu OCTOBER   October   october   X    OCT Oct oct
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
@@ -1016,7 +1016,7 @@
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
-           | MONDAY Monday monday FEBRUARY February february II
+           | THURSDAY Thursday thursday OCTOBER October october X
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
@@ -1088,7 +1088,7 @@
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 02 5 2450724
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
@@ -1160,7 +1160,7 @@
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 2 5 2450724
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
@@ -1206,40 +1206,40 @@
 -----------+----------------------
            | 
            | 
-           | 04 04 16 00 00 57600
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 02 63122
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
+           | 07 07 19 00 00 68400
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 02 73922
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
            | 12 12 00 00 00 0
            | 03 03 03 04 05 11045
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 07 07 19 32 01 70321
            | 06 06 18 19 20 65960
-           | 08 08 08 14 01 29641
-           | 04 04 04 14 02 15242
-           | 02 02 02 14 03 8043
-           | 03 03 03 14 04 11644
-           | 01 01 01 14 05 4445
-           | 05 05 17 32 01 63121
+           | 11 11 11 14 01 40441
+           | 07 07 07 14 02 26042
+           | 05 05 05 14 03 18843
+           | 06 06 06 14 04 22444
+           | 04 04 04 14 05 15245
+           | 08 08 20 32 01 73921
            | 05 05 17 32 01 63121
            | 05 05 17 32 00 63120
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 12 12 12 32 01 45121
+           | 12 12 12 32 01 45121
+           | 12 12 12 32 01 45121
            | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 09 09 09 32 01 34321
-           | 09 09 09 32 01 34321
-           | 09 09 09 32 01 34321
-           | 02 02 14 32 01 52321
-           | 02 02 14 32 01 52321
-           | 06 06 18 32 01 66721
+           | 04 04 16 32 01 59521
+           | 08 08 20 32 01 73921
            | 05 05 17 32 01 63121
            | 05 05 17 32 01 63121
            | 05 05 17 32 01 63121
@@ -1278,40 +1278,40 @@
 -----------+-------------------------------------------------
            | 
            | 
-           | HH:MI:SS is 04:00:00 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:02 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 07:00:00 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:02 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 12:00:00 "text between quote marks"
            | HH:MI:SS is 03:04:05 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 07:32:01 "text between quote marks"
            | HH:MI:SS is 06:19:20 "text between quote marks"
-           | HH:MI:SS is 08:14:01 "text between quote marks"
-           | HH:MI:SS is 04:14:02 "text between quote marks"
-           | HH:MI:SS is 02:14:03 "text between quote marks"
-           | HH:MI:SS is 03:14:04 "text between quote marks"
-           | HH:MI:SS is 01:14:05 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 11:14:01 "text between quote marks"
+           | HH:MI:SS is 07:14:02 "text between quote marks"
+           | HH:MI:SS is 05:14:03 "text between quote marks"
+           | HH:MI:SS is 06:14:04 "text between quote marks"
+           | HH:MI:SS is 04:14:05 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:00 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 02:32:01 "text between quote marks"
-           | HH:MI:SS is 02:32:01 "text between quote marks"
-           | HH:MI:SS is 06:32:01 "text between quote marks"
+           | HH:MI:SS is 04:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
@@ -1350,40 +1350,40 @@
 -----------+------------------------
            | 
            | 
-           | 16--text--00--text--00
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--02
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
+           | 19--text--00--text--00
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--02
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
            | 00--text--00--text--00
            | 03--text--04--text--05
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 19--text--32--text--01
            | 18--text--19--text--20
-           | 08--text--14--text--01
-           | 04--text--14--text--02
-           | 02--text--14--text--03
-           | 03--text--14--text--04
-           | 01--text--14--text--05
-           | 17--text--32--text--01
+           | 11--text--14--text--01
+           | 07--text--14--text--02
+           | 05--text--14--text--03
+           | 06--text--14--text--04
+           | 04--text--14--text--05
+           | 20--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--00
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 12--text--32--text--01
+           | 12--text--32--text--01
+           | 12--text--32--text--01
            | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 09--text--32--text--01
-           | 09--text--32--text--01
-           | 09--text--32--text--01
-           | 14--text--32--text--01
-           | 14--text--32--text--01
-           | 18--text--32--text--01
+           | 16--text--32--text--01
+           | 20--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--01
@@ -1448,7 +1448,7 @@
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
-           | 1997TH 1997th 2450490th
+           | 1997TH 1997th 2450724th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
@@ -1494,40 +1494,40 @@
 -----------+---------------------------------------------------------------------
            | 
            | 
-           | 1969 A.D. 1969 a.d. 1969 ad 04:00:00 P.M. 04:00:00 p.m. 04:00:00 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:02 P.M. 05:32:02 p.m. 05:32:02 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 1969 A.D. 1969 a.d. 1969 ad 07:00:00 P.M. 07:00:00 p.m. 07:00:00 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:02 P.M. 08:32:02 p.m. 08:32:02 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 12:00:00 A.M. 12:00:00 a.m. 12:00:00 am
            | 1997 A.D. 1997 a.d. 1997 ad 03:04:05 A.M. 03:04:05 a.m. 03:04:05 am
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 07:32:01 P.M. 07:32:01 p.m. 07:32:01 pm
            | 2001 A.D. 2001 a.d. 2001 ad 06:19:20 P.M. 06:19:20 p.m. 06:19:20 pm
-           | 2000 A.D. 2000 a.d. 2000 ad 08:14:01 A.M. 08:14:01 a.m. 08:14:01 am
-           | 2000 A.D. 2000 a.d. 2000 ad 04:14:02 A.M. 04:14:02 a.m. 04:14:02 am
-           | 2000 A.D. 2000 a.d. 2000 ad 02:14:03 A.M. 02:14:03 a.m. 02:14:03 am
-           | 2000 A.D. 2000 a.d. 2000 ad 03:14:04 A.M. 03:14:04 a.m. 03:14:04 am
-           | 2000 A.D. 2000 a.d. 2000 ad 01:14:05 A.M. 01:14:05 a.m. 01:14:05 am
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 2000 A.D. 2000 a.d. 2000 ad 11:14:01 A.M. 11:14:01 a.m. 11:14:01 am
+           | 2000 A.D. 2000 a.d. 2000 ad 07:14:02 A.M. 07:14:02 a.m. 07:14:02 am
+           | 2000 A.D. 2000 a.d. 2000 ad 05:14:03 A.M. 05:14:03 a.m. 05:14:03 am
+           | 2000 A.D. 2000 a.d. 2000 ad 06:14:04 A.M. 06:14:04 a.m. 06:14:04 am
+           | 2000 A.D. 2000 a.d. 2000 ad 04:14:05 A.M. 04:14:05 a.m. 04:14:05 am
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:00 P.M. 05:32:00 p.m. 05:32:00 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 02:32:01 P.M. 02:32:01 p.m. 02:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 02:32:01 P.M. 02:32:01 p.m. 02:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 06:32:01 P.M. 06:32:01 p.m. 06:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 04:32:01 P.M. 04:32:01 p.m. 04:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
@@ -1592,7 +1592,7 @@
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
-            | 1997 997 97 7 07 043 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
@@ -1664,7 +1664,7 @@
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
-            | 1997 997 97 7 7 43 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
@@ -1779,14 +1779,14 @@
 INSERT INTO TIMESTAMPTZ_TST VALUES(4, '1000000312 23:58:48 IST');
 --Verify data
 SELECT * FROM TIMESTAMPTZ_TST ORDER BY a;
- a |               b                
----+--------------------------------
- 1 | Wed Mar 12 13:58:48 1000 PST
- 2 | Sun Mar 12 14:58:48 10000 PDT
- 3 | Sun Mar 12 14:58:48 100000 PDT
- 3 | Sun Mar 12 14:58:48 10000 PDT
- 4 | Sun Mar 12 14:58:48 10000 PDT
- 4 | Sun Mar 12 14:58:48 100000 PDT
+ a |              b               
+---+------------------------------
+ 1 | 1000-03-12 16:39:28-05:19:20
+ 2 | 10000-03-12 16:58:48-05
+ 3 | 100000-03-12 16:58:48-05
+ 3 | 10000-03-12 16:58:48-05
+ 4 | 10000-03-12 16:58:48-05
+ 4 | 100000-03-12 16:58:48-05
 (6 rows)
 
 --Cleanup
@@ -1795,21 +1795,21 @@
 set TimeZone to 'America/New_York';
 -- numeric timezone
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33);
-        make_timestamptz         
----------------------------------
- Sun Jul 15 08:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 08:15:55.33-04
 (1 row)
 
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33, '+2');
-        make_timestamptz         
----------------------------------
- Sun Jul 15 02:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 02:15:55.33-04
 (1 row)
 
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33, '-2');
-        make_timestamptz         
----------------------------------
- Sun Jul 15 06:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 06:15:55.33-04
 (1 row)
 
 WITH tzs (tz) AS (VALUES
@@ -1818,23 +1818,23 @@
     ('+10:00:1'), ('+10:00:01'),
     ('+10:00:10'))
      SELECT make_timestamptz(2010, 2, 27, 3, 45, 00, tz), tz FROM tzs;
-       make_timestamptz       |    tz     
-------------------------------+-----------
- Fri Feb 26 21:45:00 2010 EST | +1
- Fri Feb 26 21:45:00 2010 EST | +1:
- Fri Feb 26 21:45:00 2010 EST | +1:0
- Fri Feb 26 21:45:00 2010 EST | +100
- Fri Feb 26 21:45:00 2010 EST | +1:00
- Fri Feb 26 21:45:00 2010 EST | +01:00
- Fri Feb 26 12:45:00 2010 EST | +10
- Fri Feb 26 12:45:00 2010 EST | +1000
- Fri Feb 26 12:45:00 2010 EST | +10:
- Fri Feb 26 12:45:00 2010 EST | +10:0
- Fri Feb 26 12:45:00 2010 EST | +10:00
- Fri Feb 26 12:45:00 2010 EST | +10:00:
- Fri Feb 26 12:44:59 2010 EST | +10:00:1
- Fri Feb 26 12:44:59 2010 EST | +10:00:01
- Fri Feb 26 12:44:50 2010 EST | +10:00:10
+    make_timestamptz    |    tz     
+------------------------+-----------
+ 2010-02-26 21:45:00-05 | +1
+ 2010-02-26 21:45:00-05 | +1:
+ 2010-02-26 21:45:00-05 | +1:0
+ 2010-02-26 21:45:00-05 | +100
+ 2010-02-26 21:45:00-05 | +1:00
+ 2010-02-26 21:45:00-05 | +01:00
+ 2010-02-26 12:45:00-05 | +10
+ 2010-02-26 12:45:00-05 | +1000
+ 2010-02-26 12:45:00-05 | +10:
+ 2010-02-26 12:45:00-05 | +10:0
+ 2010-02-26 12:45:00-05 | +10:00
+ 2010-02-26 12:45:00-05 | +10:00:
+ 2010-02-26 12:44:59-05 | +10:00:1
+ 2010-02-26 12:44:59-05 | +10:00:01
+ 2010-02-26 12:44:50-05 | +10:00:10
 (15 rows)
 
 -- these should fail
@@ -1860,42 +1860,42 @@
 (1 row)
 
 SELECT make_timestamptz(2014, 12, 10, 0, 0, 0, 'Europe/Prague') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Tue Dec 09 23:00:00 2014
+      timezone       
+---------------------
+ 2014-12-09 23:00:00
 (1 row)
 
 SELECT make_timestamptz(1846, 12, 10, 0, 0, 0, 'Asia/Manila') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Wed Dec 09 15:56:00 1846
+      timezone       
+---------------------
+ 1846-12-09 15:56:00
 (1 row)
 
 SELECT make_timestamptz(1881, 12, 10, 0, 0, 0, 'Europe/Paris') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Fri Dec 09 23:50:39 1881
+      timezone       
+---------------------
+ 1881-12-09 23:50:39
 (1 row)
 
 SELECT make_timestamptz(1910, 12, 24, 0, 0, 0, 'Nehwon/Lankhmar');
 ERROR:  time zone "Nehwon/Lankhmar" not recognized
 -- abbreviations
 SELECT make_timestamptz(2008, 12, 10, 10, 10, 10, 'EST');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 10:10:10 2008 EST
+    make_timestamptz    
+------------------------
+ 2008-12-10 10:10:10-05
 (1 row)
 
 SELECT make_timestamptz(2008, 12, 10, 10, 10, 10, 'EDT');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 09:10:10 2008 EST
+    make_timestamptz    
+------------------------
+ 2008-12-10 09:10:10-05
 (1 row)
 
 SELECT make_timestamptz(2014, 12, 10, 10, 10, 10, 'PST8PDT');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 13:10:10 2014 EST
+    make_timestamptz    
+------------------------
+ 2014-12-10 13:10:10-05
 (1 row)
 
 RESET TimeZone;
@@ -1906,376 +1906,376 @@
 --
 SET TimeZone to 'UTC';
 SELECT '2011-03-27 00:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT make_timestamptz(2014, 10, 26, 0, 0, 0, 'MSK');
-       make_timestamptz       
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+    make_timestamptz    
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT make_timestamptz(2014, 10, 26, 1, 0, 0, 'MSK');
-       make_timestamptz       
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+    make_timestamptz    
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT to_timestamp(         0);          -- 1970-01-01 00:00:00+00
-         to_timestamp         
-------------------------------
- Thu Jan 01 00:00:00 1970 UTC
+      to_timestamp      
+------------------------
+ 1970-01-01 00:00:00+00
 (1 row)
 
 SELECT to_timestamp( 946684800);          -- 2000-01-01 00:00:00+00
-         to_timestamp         
-------------------------------
- Sat Jan 01 00:00:00 2000 UTC
+      to_timestamp      
+------------------------
+ 2000-01-01 00:00:00+00
 (1 row)
 
 SELECT to_timestamp(1262349296.7890123);  -- 2010-01-01 12:34:56.789012+00
-            to_timestamp             
--------------------------------------
- Fri Jan 01 12:34:56.789012 2010 UTC
+         to_timestamp          
+-------------------------------
+ 2010-01-01 12:34:56.789012+00
 (1 row)
 
 -- edge cases
 SELECT to_timestamp(-210866803200);       --   4714-11-24 00:00:00+00 BC
-          to_timestamp           
----------------------------------
- Mon Nov 24 00:00:00 4714 UTC BC
+       to_timestamp        
+---------------------------
+ 4714-11-24 00:00:00+00 BC
 (1 row)
 
 -- upper limit varies between integer and float timestamps, so hard to test
@@ -2296,220 +2296,220 @@
 ERROR:  timestamp cannot be NaN
 SET TimeZone to 'Europe/Moscow';
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+03
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 01:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 01:00:00+03
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 01:59:59 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 01:59:59+03
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:00:00+04
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:00:01 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:00:01+04
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:59:59 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:59:59+04
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 04:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 04:00:00+04
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:00+04
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:59:59 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:59:59+04
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:00+03
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:01 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:01+03
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 02:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 02:00:00+03
 (1 row)
 
 RESET TimeZone;
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 00:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 00:00:00
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 01:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 01:00:00
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 01:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 01:59:59
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:00
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:00:01 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:01
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 03:59:59
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 04:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 04:00:00
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:59:59 2014
+      timezone       
+---------------------
+ 2014-10-26 01:59:59
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:01 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:01
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 02:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 02:00:00
 (1 row)
 
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 00:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 00:00:00
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 01:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 01:00:00
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 01:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 01:59:59
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:00
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:00:01 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:01
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 03:59:59
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 04:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 04:00:00
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:59:59 2014
+      timezone       
+---------------------
+ 2014-10-26 01:59:59
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:01 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:01
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 02:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 02:00:00
 (1 row)
 
 --
@@ -2519,15 +2519,15 @@
 insert into tmptz values ('2017-01-18 00:00+00');
 explain (costs off)
 select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
-                                           QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
+                                         QUERY PLAN                                         
+--------------------------------------------------------------------------------------------
  Seq Scan on tmptz
-   Filter: (timezone('utc'::text, f1) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
+   Filter: (timezone('utc'::text, f1) = '2017-01-18 00:00:00'::timestamp without time zone)
 (2 rows)
 
 select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
-              f1              
-------------------------------
- Tue Jan 17 16:00:00 2017 PST
+           f1           
+------------------------
+ 2017-01-17 19:00:00-05
 (1 row)
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/horology.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/horology.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/horology.out	2019-08-12 14:55:05.430230622 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/horology.out	2019-09-05 16:27:41.191765820 -0500
@@ -8,73 +8,73 @@
 SELECT timestamp with time zone '20011227 040506+08';
          timestamptz          
 ------------------------------
- Wed Dec 26 12:05:06 2001 PST
+ Wed Dec 26 15:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506-08';
          timestamptz          
 ------------------------------
- Thu Dec 27 04:05:06 2001 PST
+ Thu Dec 27 07:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506.789+08';
            timestamptz            
 ----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+ Wed Dec 26 15:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506+08';
          timestamptz          
 ------------------------------
- Wed Dec 26 12:05:06 2001 PST
+ Wed Dec 26 15:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506-08';
          timestamptz          
 ------------------------------
- Thu Dec 27 04:05:06 2001 PST
+ Thu Dec 27 07:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506.789+08';
            timestamptz            
 ----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+ Wed Dec 26 15:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001-12-27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001.12.27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001/12/27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '12/27/2001 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 -- should fail in mdy mode:
@@ -87,118 +87,116 @@
 SELECT timestamp with time zone '27/12/2001 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu 27 Dec 04:05:06.789 2001 PST
+ Thu 27 Dec 07:05:06.789 2001 -05
 (1 row)
 
 reset datestyle;
 SELECT timestamp with time zone 'Y2001M12D27H04M05S06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04M05S06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04MM05S06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04MM05S06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 08:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 11:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 00:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 03:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271.5+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 20:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 23:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271.5-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 12:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 15:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271 04:05:06+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 12:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 15:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271 04:05:06-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 04:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 07:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 12:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 15:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 04:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 07:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 -- German/European-style dates with periods as delimiters
 SELECT timestamp with time zone '12.27.2001 04:05:06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
-(1 row)
-
+ERROR:  date/time field value out of range: "12.27.2001 04:05:06.789+08"
+LINE 1: SELECT timestamp with time zone '12.27.2001 04:05:06.789+08'...
+                                        ^
+HINT:  Perhaps you need a different "datestyle" setting.
 SELECT timestamp with time zone '12.27.2001 04:05:06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
-(1 row)
-
+ERROR:  date/time field value out of range: "12.27.2001 04:05:06.789-08"
+LINE 1: SELECT timestamp with time zone '12.27.2001 04:05:06.789-08'...
+                                        ^
+HINT:  Perhaps you need a different "datestyle" setting.
 SET DateStyle = 'German';
 SELECT timestamp with time zone '27.12.2001 04:05:06.789+08';
          timestamptz         
 -----------------------------
- 26.12.2001 12:05:06.789 PST
+ 26.12.2001 15:05:06.789 -05
 (1 row)
 
 SELECT timestamp with time zone '27.12.2001 04:05:06.789-08';
          timestamptz         
 -----------------------------
- 27.12.2001 04:05:06.789 PST
+ 27.12.2001 07:05:06.789 -05
 (1 row)
 
 SET DateStyle = 'ISO';
@@ -289,13 +287,13 @@
 SELECT date '1991-02-03' + time with time zone '04:05:06 PST' AS "Date + Time PST";
        Date + Time PST        
 ------------------------------
- Sun Feb 03 04:05:06 1991 PST
+ Sun Feb 03 07:05:06 1991 -05
 (1 row)
 
 SELECT date '2001-02-03' + time with time zone '04:05:06 UTC' AS "Date + Time UTC";
        Date + Time UTC        
 ------------------------------
- Fri Feb 02 20:05:06 2001 PST
+ Fri Feb 02 23:05:06 2001 -05
 (1 row)
 
 SELECT date '1991-02-03' + interval '2 years' AS "Add Two Years";
@@ -368,9 +366,9 @@
 (1 row)
 
 SELECT timestamp without time zone '12/31/294276' - timestamp without time zone '12/23/1999' AS "106751991 Days";
-  106751991 Days  
-------------------
- @ 106751991 days
+ 106751991 Days 
+----------------
+ 106751991 days
 (1 row)
 
 -- Shorthand values
@@ -454,13 +452,13 @@
 SELECT date '1994-01-01' + timetz '11:00-5' AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-5') AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT '' AS "64", d1 + interval '1 year' AS one_year FROM TIMESTAMP_TBL;
@@ -494,7 +492,7 @@
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
-    | Tue Feb 10 17:32:01 1998
+    | Fri Oct 02 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
@@ -564,7 +562,7 @@
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
-    | Sat Feb 10 17:32:01 1996
+    | Wed Oct 02 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
@@ -606,25 +604,25 @@
 SELECT timestamp with time zone '1996-03-01' - interval '1 second' AS "Feb 29";
             Feb 29            
 ------------------------------
- Thu Feb 29 23:59:59 1996 PST
+ Thu Feb 29 23:59:59 1996 -05
 (1 row)
 
 SELECT timestamp with time zone '1999-03-01' - interval '1 second' AS "Feb 28";
             Feb 28            
 ------------------------------
- Sun Feb 28 23:59:59 1999 PST
+ Sun Feb 28 23:59:59 1999 -05
 (1 row)
 
 SELECT timestamp with time zone '2000-03-01' - interval '1 second' AS "Feb 29";
             Feb 29            
 ------------------------------
- Tue Feb 29 23:59:59 2000 PST
+ Tue Feb 29 23:59:59 2000 -05
 (1 row)
 
 SELECT timestamp with time zone '1999-12-01' + interval '1 month - 1 second' AS "Dec 31";
             Dec 31            
 ------------------------------
- Fri Dec 31 23:59:59 1999 PST
+ Fri Dec 31 23:59:59 1999 -05
 (1 row)
 
 SELECT (timestamp with time zone 'today' = (timestamp with time zone 'yesterday' + interval '1 day')) as "True";
@@ -681,31 +679,31 @@
 SELECT timestamptz(date '1994-01-01', time '11:00') AS "Jan_01_1994_10am";
        Jan_01_1994_10am       
 ------------------------------
- Sat Jan 01 11:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time '10:00') AS "Jan_01_1994_9am";
        Jan_01_1994_9am        
 ------------------------------
- Sat Jan 01 10:00:00 1994 PST
+ Sat Jan 01 10:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-8') AS "Jan_01_1994_11am";
        Jan_01_1994_11am       
 ------------------------------
- Sat Jan 01 11:00:00 1994 PST
+ Sat Jan 01 14:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '10:00-8') AS "Jan_01_1994_10am";
        Jan_01_1994_10am       
 ------------------------------
- Sat Jan 01 10:00:00 1994 PST
+ Sat Jan 01 13:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-5') AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT '' AS "64", d1 + interval '1 year' AS one_year FROM TIMESTAMPTZ_TBL;
@@ -713,70 +711,70 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Thu Dec 31 16:00:00 1970 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:02 1998 PST
-    | Tue Feb 10 17:32:01.4 1998 PST
-    | Tue Feb 10 17:32:01.5 1998 PST
-    | Tue Feb 10 17:32:01.6 1998 PST
-    | Fri Jan 02 00:00:00 1998 PST
-    | Fri Jan 02 03:04:05 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Wed Jun 10 17:32:01 1998 PDT
-    | Sun Sep 22 18:19:20 2002 PDT
-    | Thu Mar 15 08:14:01 2001 PST
-    | Thu Mar 15 04:14:02 2001 PST
-    | Thu Mar 15 02:14:03 2001 PST
-    | Thu Mar 15 03:14:04 2001 PST
-    | Thu Mar 15 01:14:05 2001 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:00 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 14:32:01 1998 PST
-    | Fri Jul 10 14:32:01 1998 PDT
-    | Wed Jun 10 18:32:01 1998 PDT
-    | Tue Feb 10 17:32:01 1998 PST
-    | Wed Feb 11 17:32:01 1998 PST
-    | Thu Feb 12 17:32:01 1998 PST
-    | Fri Feb 13 17:32:01 1998 PST
-    | Sat Feb 14 17:32:01 1998 PST
-    | Sun Feb 15 17:32:01 1998 PST
-    | Mon Feb 16 17:32:01 1998 PST
-    | Thu Feb 16 17:32:01 0096 PST BC
-    | Sun Feb 16 17:32:01 0098 PST
-    | Fri Feb 16 17:32:01 0598 PST
-    | Wed Feb 16 17:32:01 1098 PST
-    | Sun Feb 16 17:32:01 1698 PST
-    | Fri Feb 16 17:32:01 1798 PST
-    | Wed Feb 16 17:32:01 1898 PST
-    | Mon Feb 16 17:32:01 1998 PST
-    | Sun Feb 16 17:32:01 2098 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Thu Jan 01 17:32:01 1998 PST
-    | Sat Feb 28 17:32:01 1998 PST
-    | Sun Mar 01 17:32:01 1998 PST
-    | Wed Dec 30 17:32:01 1998 PST
-    | Thu Dec 31 17:32:01 1998 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
-    | Mon Dec 31 17:32:01 2001 PST
-    | Tue Jan 01 17:32:01 2002 PST
+    | Thu Dec 31 19:00:00 1970 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:02 1998 -05
+    | Tue Feb 10 20:32:01.4 1998 -05
+    | Tue Feb 10 20:32:01.5 1998 -05
+    | Tue Feb 10 20:32:01.6 1998 -05
+    | Fri Jan 02 00:00:00 1998 -05
+    | Fri Jan 02 03:04:05 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Wed Jun 10 19:32:01 1998 -05
+    | Sun Sep 22 18:19:20 2002 -05
+    | Thu Mar 15 11:14:01 2001 -05
+    | Thu Mar 15 07:14:02 2001 -05
+    | Thu Mar 15 05:14:03 2001 -05
+    | Thu Mar 15 06:14:04 2001 -05
+    | Thu Mar 15 04:14:05 2001 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Tue Feb 10 17:32:00 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Fri Oct 02 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Fri Jul 10 16:32:01 1998 -05
+    | Wed Jun 10 20:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Wed Feb 11 17:32:01 1998 -05
+    | Thu Feb 12 17:32:01 1998 -05
+    | Fri Feb 13 17:32:01 1998 -05
+    | Sat Feb 14 17:32:01 1998 -05
+    | Sun Feb 15 17:32:01 1998 -05
+    | Mon Feb 16 17:32:01 1998 -05
+    | Thu Feb 16 17:32:01 0096 LMT BC
+    | Sun Feb 16 17:32:01 0098 LMT
+    | Fri Feb 16 17:32:01 0598 LMT
+    | Wed Feb 16 17:32:01 1098 LMT
+    | Sun Feb 16 17:32:01 1698 LMT
+    | Fri Feb 16 17:32:01 1798 LMT
+    | Wed Feb 16 17:32:01 1898 QMT
+    | Mon Feb 16 17:32:01 1998 -05
+    | Sun Feb 16 17:32:01 2098 -05
+    | Fri Feb 28 17:32:01 1997 -05
+    | Fri Feb 28 17:32:01 1997 -05
+    | Sat Mar 01 17:32:01 1997 -05
+    | Tue Dec 30 17:32:01 1997 -05
+    | Wed Dec 31 17:32:01 1997 -05
+    | Thu Jan 01 17:32:01 1998 -05
+    | Sat Feb 28 17:32:01 1998 -05
+    | Sun Mar 01 17:32:01 1998 -05
+    | Wed Dec 30 17:32:01 1998 -05
+    | Thu Dec 31 17:32:01 1998 -05
+    | Sun Dec 31 17:32:01 2000 -05
+    | Mon Jan 01 17:32:01 2001 -05
+    | Mon Dec 31 17:32:01 2001 -05
+    | Tue Jan 01 17:32:01 2002 -05
 (66 rows)
 
 SELECT '' AS "64", d1 - interval '1 year' AS one_year FROM TIMESTAMPTZ_TBL;
@@ -784,79 +782,79 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Tue Dec 31 16:00:00 1968 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:02 1996 PST
-    | Sat Feb 10 17:32:01.4 1996 PST
-    | Sat Feb 10 17:32:01.5 1996 PST
-    | Sat Feb 10 17:32:01.6 1996 PST
-    | Tue Jan 02 00:00:00 1996 PST
-    | Tue Jan 02 03:04:05 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Mon Jun 10 17:32:01 1996 PDT
-    | Fri Sep 22 18:19:20 2000 PDT
-    | Mon Mar 15 08:14:01 1999 PST
-    | Mon Mar 15 04:14:02 1999 PST
-    | Mon Mar 15 02:14:03 1999 PST
-    | Mon Mar 15 03:14:04 1999 PST
-    | Mon Mar 15 01:14:05 1999 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:00 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 14:32:01 1996 PST
-    | Wed Jul 10 14:32:01 1996 PDT
-    | Mon Jun 10 18:32:01 1996 PDT
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sun Feb 11 17:32:01 1996 PST
-    | Mon Feb 12 17:32:01 1996 PST
-    | Tue Feb 13 17:32:01 1996 PST
-    | Wed Feb 14 17:32:01 1996 PST
-    | Thu Feb 15 17:32:01 1996 PST
-    | Fri Feb 16 17:32:01 1996 PST
-    | Mon Feb 16 17:32:01 0098 PST BC
-    | Thu Feb 16 17:32:01 0096 PST
-    | Tue Feb 16 17:32:01 0596 PST
-    | Sun Feb 16 17:32:01 1096 PST
-    | Thu Feb 16 17:32:01 1696 PST
-    | Tue Feb 16 17:32:01 1796 PST
-    | Sun Feb 16 17:32:01 1896 PST
-    | Fri Feb 16 17:32:01 1996 PST
-    | Thu Feb 16 17:32:01 2096 PST
-    | Tue Feb 28 17:32:01 1995 PST
-    | Tue Feb 28 17:32:01 1995 PST
-    | Wed Mar 01 17:32:01 1995 PST
-    | Sat Dec 30 17:32:01 1995 PST
-    | Sun Dec 31 17:32:01 1995 PST
-    | Mon Jan 01 17:32:01 1996 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Thu Dec 31 17:32:01 1998 PST
-    | Fri Jan 01 17:32:01 1999 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
+    | Tue Dec 31 19:00:00 1968 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:02 1996 -05
+    | Sat Feb 10 20:32:01.4 1996 -05
+    | Sat Feb 10 20:32:01.5 1996 -05
+    | Sat Feb 10 20:32:01.6 1996 -05
+    | Tue Jan 02 00:00:00 1996 -05
+    | Tue Jan 02 03:04:05 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Mon Jun 10 19:32:01 1996 -05
+    | Fri Sep 22 18:19:20 2000 -05
+    | Mon Mar 15 11:14:01 1999 -05
+    | Mon Mar 15 07:14:02 1999 -05
+    | Mon Mar 15 05:14:03 1999 -05
+    | Mon Mar 15 06:14:04 1999 -05
+    | Mon Mar 15 04:14:05 1999 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Sat Feb 10 17:32:00 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Wed Oct 02 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Wed Jul 10 16:32:01 1996 -05
+    | Mon Jun 10 20:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Sun Feb 11 17:32:01 1996 -05
+    | Mon Feb 12 17:32:01 1996 -05
+    | Tue Feb 13 17:32:01 1996 -05
+    | Wed Feb 14 17:32:01 1996 -05
+    | Thu Feb 15 17:32:01 1996 -05
+    | Fri Feb 16 17:32:01 1996 -05
+    | Mon Feb 16 17:32:01 0098 LMT BC
+    | Thu Feb 16 17:32:01 0096 LMT
+    | Tue Feb 16 17:32:01 0596 LMT
+    | Sun Feb 16 17:32:01 1096 LMT
+    | Thu Feb 16 17:32:01 1696 LMT
+    | Tue Feb 16 17:32:01 1796 LMT
+    | Sun Feb 16 17:32:01 1896 QMT
+    | Fri Feb 16 17:32:01 1996 -05
+    | Thu Feb 16 17:32:01 2096 -05
+    | Tue Feb 28 17:32:01 1995 -05
+    | Tue Feb 28 17:32:01 1995 -05
+    | Wed Mar 01 17:32:01 1995 -05
+    | Sat Dec 30 17:32:01 1995 -05
+    | Sun Dec 31 17:32:01 1995 -05
+    | Mon Jan 01 17:32:01 1996 -05
+    | Wed Feb 28 17:32:01 1996 -05
+    | Fri Mar 01 17:32:01 1996 -05
+    | Mon Dec 30 17:32:01 1996 -05
+    | Tue Dec 31 17:32:01 1996 -05
+    | Thu Dec 31 17:32:01 1998 -05
+    | Fri Jan 01 17:32:01 1999 -05
+    | Fri Dec 31 17:32:01 1999 -05
+    | Sat Jan 01 17:32:01 2000 -05
 (66 rows)
 
 --
 -- time, interval arithmetic
 --
 SELECT CAST(time '01:02' AS interval) AS "+01:02";
-     +01:02      
------------------
- @ 1 hour 2 mins
+  +01:02  
+----------
+ 01:02:00
 (1 row)
 
 SELECT CAST(interval '02:03' AS time) AS "02:03:00";
@@ -933,346 +931,346 @@
   WHERE t.d1 BETWEEN '1990-01-01' AND '2001-01-01'
     AND i.f1 BETWEEN '00:00' AND '23:00'
   ORDER BY 1,2;
-             t              |     i     |            add             |          subtract          
-----------------------------+-----------+----------------------------+----------------------------
- Wed Feb 28 17:32:01 1996   | @ 1 min   | Wed Feb 28 17:33:01 1996   | Wed Feb 28 17:31:01 1996
- Wed Feb 28 17:32:01 1996   | @ 5 hours | Wed Feb 28 22:32:01 1996   | Wed Feb 28 12:32:01 1996
- Thu Feb 29 17:32:01 1996   | @ 1 min   | Thu Feb 29 17:33:01 1996   | Thu Feb 29 17:31:01 1996
- Thu Feb 29 17:32:01 1996   | @ 5 hours | Thu Feb 29 22:32:01 1996   | Thu Feb 29 12:32:01 1996
- Fri Mar 01 17:32:01 1996   | @ 1 min   | Fri Mar 01 17:33:01 1996   | Fri Mar 01 17:31:01 1996
- Fri Mar 01 17:32:01 1996   | @ 5 hours | Fri Mar 01 22:32:01 1996   | Fri Mar 01 12:32:01 1996
- Mon Dec 30 17:32:01 1996   | @ 1 min   | Mon Dec 30 17:33:01 1996   | Mon Dec 30 17:31:01 1996
- Mon Dec 30 17:32:01 1996   | @ 5 hours | Mon Dec 30 22:32:01 1996   | Mon Dec 30 12:32:01 1996
- Tue Dec 31 17:32:01 1996   | @ 1 min   | Tue Dec 31 17:33:01 1996   | Tue Dec 31 17:31:01 1996
- Tue Dec 31 17:32:01 1996   | @ 5 hours | Tue Dec 31 22:32:01 1996   | Tue Dec 31 12:32:01 1996
- Wed Jan 01 17:32:01 1997   | @ 1 min   | Wed Jan 01 17:33:01 1997   | Wed Jan 01 17:31:01 1997
- Wed Jan 01 17:32:01 1997   | @ 5 hours | Wed Jan 01 22:32:01 1997   | Wed Jan 01 12:32:01 1997
- Thu Jan 02 00:00:00 1997   | @ 1 min   | Thu Jan 02 00:01:00 1997   | Wed Jan 01 23:59:00 1997
- Thu Jan 02 00:00:00 1997   | @ 5 hours | Thu Jan 02 05:00:00 1997   | Wed Jan 01 19:00:00 1997
- Thu Jan 02 03:04:05 1997   | @ 1 min   | Thu Jan 02 03:05:05 1997   | Thu Jan 02 03:03:05 1997
- Thu Jan 02 03:04:05 1997   | @ 5 hours | Thu Jan 02 08:04:05 1997   | Wed Jan 01 22:04:05 1997
- Mon Feb 10 17:32:00 1997   | @ 1 min   | Mon Feb 10 17:33:00 1997   | Mon Feb 10 17:31:00 1997
- Mon Feb 10 17:32:00 1997   | @ 5 hours | Mon Feb 10 22:32:00 1997   | Mon Feb 10 12:32:00 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01.4 1997 | @ 1 min   | Mon Feb 10 17:33:01.4 1997 | Mon Feb 10 17:31:01.4 1997
- Mon Feb 10 17:32:01.4 1997 | @ 5 hours | Mon Feb 10 22:32:01.4 1997 | Mon Feb 10 12:32:01.4 1997
- Mon Feb 10 17:32:01.5 1997 | @ 1 min   | Mon Feb 10 17:33:01.5 1997 | Mon Feb 10 17:31:01.5 1997
- Mon Feb 10 17:32:01.5 1997 | @ 5 hours | Mon Feb 10 22:32:01.5 1997 | Mon Feb 10 12:32:01.5 1997
- Mon Feb 10 17:32:01.6 1997 | @ 1 min   | Mon Feb 10 17:33:01.6 1997 | Mon Feb 10 17:31:01.6 1997
- Mon Feb 10 17:32:01.6 1997 | @ 5 hours | Mon Feb 10 22:32:01.6 1997 | Mon Feb 10 12:32:01.6 1997
- Mon Feb 10 17:32:02 1997   | @ 1 min   | Mon Feb 10 17:33:02 1997   | Mon Feb 10 17:31:02 1997
- Mon Feb 10 17:32:02 1997   | @ 5 hours | Mon Feb 10 22:32:02 1997   | Mon Feb 10 12:32:02 1997
- Tue Feb 11 17:32:01 1997   | @ 1 min   | Tue Feb 11 17:33:01 1997   | Tue Feb 11 17:31:01 1997
- Tue Feb 11 17:32:01 1997   | @ 5 hours | Tue Feb 11 22:32:01 1997   | Tue Feb 11 12:32:01 1997
- Wed Feb 12 17:32:01 1997   | @ 1 min   | Wed Feb 12 17:33:01 1997   | Wed Feb 12 17:31:01 1997
- Wed Feb 12 17:32:01 1997   | @ 5 hours | Wed Feb 12 22:32:01 1997   | Wed Feb 12 12:32:01 1997
- Thu Feb 13 17:32:01 1997   | @ 1 min   | Thu Feb 13 17:33:01 1997   | Thu Feb 13 17:31:01 1997
- Thu Feb 13 17:32:01 1997   | @ 5 hours | Thu Feb 13 22:32:01 1997   | Thu Feb 13 12:32:01 1997
- Fri Feb 14 17:32:01 1997   | @ 1 min   | Fri Feb 14 17:33:01 1997   | Fri Feb 14 17:31:01 1997
- Fri Feb 14 17:32:01 1997   | @ 5 hours | Fri Feb 14 22:32:01 1997   | Fri Feb 14 12:32:01 1997
- Sat Feb 15 17:32:01 1997   | @ 1 min   | Sat Feb 15 17:33:01 1997   | Sat Feb 15 17:31:01 1997
- Sat Feb 15 17:32:01 1997   | @ 5 hours | Sat Feb 15 22:32:01 1997   | Sat Feb 15 12:32:01 1997
- Sun Feb 16 17:32:01 1997   | @ 1 min   | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
- Sun Feb 16 17:32:01 1997   | @ 1 min   | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
- Sun Feb 16 17:32:01 1997   | @ 5 hours | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
- Sun Feb 16 17:32:01 1997   | @ 5 hours | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
- Fri Feb 28 17:32:01 1997   | @ 1 min   | Fri Feb 28 17:33:01 1997   | Fri Feb 28 17:31:01 1997
- Fri Feb 28 17:32:01 1997   | @ 5 hours | Fri Feb 28 22:32:01 1997   | Fri Feb 28 12:32:01 1997
- Sat Mar 01 17:32:01 1997   | @ 1 min   | Sat Mar 01 17:33:01 1997   | Sat Mar 01 17:31:01 1997
- Sat Mar 01 17:32:01 1997   | @ 5 hours | Sat Mar 01 22:32:01 1997   | Sat Mar 01 12:32:01 1997
- Tue Jun 10 17:32:01 1997   | @ 1 min   | Tue Jun 10 17:33:01 1997   | Tue Jun 10 17:31:01 1997
- Tue Jun 10 17:32:01 1997   | @ 5 hours | Tue Jun 10 22:32:01 1997   | Tue Jun 10 12:32:01 1997
- Tue Jun 10 18:32:01 1997   | @ 1 min   | Tue Jun 10 18:33:01 1997   | Tue Jun 10 18:31:01 1997
- Tue Jun 10 18:32:01 1997   | @ 5 hours | Tue Jun 10 23:32:01 1997   | Tue Jun 10 13:32:01 1997
- Tue Dec 30 17:32:01 1997   | @ 1 min   | Tue Dec 30 17:33:01 1997   | Tue Dec 30 17:31:01 1997
- Tue Dec 30 17:32:01 1997   | @ 5 hours | Tue Dec 30 22:32:01 1997   | Tue Dec 30 12:32:01 1997
- Wed Dec 31 17:32:01 1997   | @ 1 min   | Wed Dec 31 17:33:01 1997   | Wed Dec 31 17:31:01 1997
- Wed Dec 31 17:32:01 1997   | @ 5 hours | Wed Dec 31 22:32:01 1997   | Wed Dec 31 12:32:01 1997
- Fri Dec 31 17:32:01 1999   | @ 1 min   | Fri Dec 31 17:33:01 1999   | Fri Dec 31 17:31:01 1999
- Fri Dec 31 17:32:01 1999   | @ 5 hours | Fri Dec 31 22:32:01 1999   | Fri Dec 31 12:32:01 1999
- Sat Jan 01 17:32:01 2000   | @ 1 min   | Sat Jan 01 17:33:01 2000   | Sat Jan 01 17:31:01 2000
- Sat Jan 01 17:32:01 2000   | @ 5 hours | Sat Jan 01 22:32:01 2000   | Sat Jan 01 12:32:01 2000
- Wed Mar 15 02:14:05 2000   | @ 1 min   | Wed Mar 15 02:15:05 2000   | Wed Mar 15 02:13:05 2000
- Wed Mar 15 02:14:05 2000   | @ 5 hours | Wed Mar 15 07:14:05 2000   | Tue Mar 14 21:14:05 2000
- Wed Mar 15 03:14:04 2000   | @ 1 min   | Wed Mar 15 03:15:04 2000   | Wed Mar 15 03:13:04 2000
- Wed Mar 15 03:14:04 2000   | @ 5 hours | Wed Mar 15 08:14:04 2000   | Tue Mar 14 22:14:04 2000
- Wed Mar 15 08:14:01 2000   | @ 1 min   | Wed Mar 15 08:15:01 2000   | Wed Mar 15 08:13:01 2000
- Wed Mar 15 08:14:01 2000   | @ 5 hours | Wed Mar 15 13:14:01 2000   | Wed Mar 15 03:14:01 2000
- Wed Mar 15 12:14:03 2000   | @ 1 min   | Wed Mar 15 12:15:03 2000   | Wed Mar 15 12:13:03 2000
- Wed Mar 15 12:14:03 2000   | @ 5 hours | Wed Mar 15 17:14:03 2000   | Wed Mar 15 07:14:03 2000
- Wed Mar 15 13:14:02 2000   | @ 1 min   | Wed Mar 15 13:15:02 2000   | Wed Mar 15 13:13:02 2000
- Wed Mar 15 13:14:02 2000   | @ 5 hours | Wed Mar 15 18:14:02 2000   | Wed Mar 15 08:14:02 2000
- Sun Dec 31 17:32:01 2000   | @ 1 min   | Sun Dec 31 17:33:01 2000   | Sun Dec 31 17:31:01 2000
- Sun Dec 31 17:32:01 2000   | @ 5 hours | Sun Dec 31 22:32:01 2000   | Sun Dec 31 12:32:01 2000
+             t              |    i     |            add             |          subtract          
+----------------------------+----------+----------------------------+----------------------------
+ Wed Feb 28 17:32:01 1996   | 00:01:00 | Wed Feb 28 17:33:01 1996   | Wed Feb 28 17:31:01 1996
+ Wed Feb 28 17:32:01 1996   | 05:00:00 | Wed Feb 28 22:32:01 1996   | Wed Feb 28 12:32:01 1996
+ Thu Feb 29 17:32:01 1996   | 00:01:00 | Thu Feb 29 17:33:01 1996   | Thu Feb 29 17:31:01 1996
+ Thu Feb 29 17:32:01 1996   | 05:00:00 | Thu Feb 29 22:32:01 1996   | Thu Feb 29 12:32:01 1996
+ Fri Mar 01 17:32:01 1996   | 00:01:00 | Fri Mar 01 17:33:01 1996   | Fri Mar 01 17:31:01 1996
+ Fri Mar 01 17:32:01 1996   | 05:00:00 | Fri Mar 01 22:32:01 1996   | Fri Mar 01 12:32:01 1996
+ Mon Dec 30 17:32:01 1996   | 00:01:00 | Mon Dec 30 17:33:01 1996   | Mon Dec 30 17:31:01 1996
+ Mon Dec 30 17:32:01 1996   | 05:00:00 | Mon Dec 30 22:32:01 1996   | Mon Dec 30 12:32:01 1996
+ Tue Dec 31 17:32:01 1996   | 00:01:00 | Tue Dec 31 17:33:01 1996   | Tue Dec 31 17:31:01 1996
+ Tue Dec 31 17:32:01 1996   | 05:00:00 | Tue Dec 31 22:32:01 1996   | Tue Dec 31 12:32:01 1996
+ Wed Jan 01 17:32:01 1997   | 00:01:00 | Wed Jan 01 17:33:01 1997   | Wed Jan 01 17:31:01 1997
+ Wed Jan 01 17:32:01 1997   | 05:00:00 | Wed Jan 01 22:32:01 1997   | Wed Jan 01 12:32:01 1997
+ Thu Jan 02 00:00:00 1997   | 00:01:00 | Thu Jan 02 00:01:00 1997   | Wed Jan 01 23:59:00 1997
+ Thu Jan 02 00:00:00 1997   | 05:00:00 | Thu Jan 02 05:00:00 1997   | Wed Jan 01 19:00:00 1997
+ Thu Jan 02 03:04:05 1997   | 00:01:00 | Thu Jan 02 03:05:05 1997   | Thu Jan 02 03:03:05 1997
+ Thu Jan 02 03:04:05 1997   | 05:00:00 | Thu Jan 02 08:04:05 1997   | Wed Jan 01 22:04:05 1997
+ Mon Feb 10 17:32:00 1997   | 00:01:00 | Mon Feb 10 17:33:00 1997   | Mon Feb 10 17:31:00 1997
+ Mon Feb 10 17:32:00 1997   | 05:00:00 | Mon Feb 10 22:32:00 1997   | Mon Feb 10 12:32:00 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01.4 1997 | 00:01:00 | Mon Feb 10 17:33:01.4 1997 | Mon Feb 10 17:31:01.4 1997
+ Mon Feb 10 17:32:01.4 1997 | 05:00:00 | Mon Feb 10 22:32:01.4 1997 | Mon Feb 10 12:32:01.4 1997
+ Mon Feb 10 17:32:01.5 1997 | 00:01:00 | Mon Feb 10 17:33:01.5 1997 | Mon Feb 10 17:31:01.5 1997
+ Mon Feb 10 17:32:01.5 1997 | 05:00:00 | Mon Feb 10 22:32:01.5 1997 | Mon Feb 10 12:32:01.5 1997
+ Mon Feb 10 17:32:01.6 1997 | 00:01:00 | Mon Feb 10 17:33:01.6 1997 | Mon Feb 10 17:31:01.6 1997
+ Mon Feb 10 17:32:01.6 1997 | 05:00:00 | Mon Feb 10 22:32:01.6 1997 | Mon Feb 10 12:32:01.6 1997
+ Mon Feb 10 17:32:02 1997   | 00:01:00 | Mon Feb 10 17:33:02 1997   | Mon Feb 10 17:31:02 1997
+ Mon Feb 10 17:32:02 1997   | 05:00:00 | Mon Feb 10 22:32:02 1997   | Mon Feb 10 12:32:02 1997
+ Tue Feb 11 17:32:01 1997   | 00:01:00 | Tue Feb 11 17:33:01 1997   | Tue Feb 11 17:31:01 1997
+ Tue Feb 11 17:32:01 1997   | 05:00:00 | Tue Feb 11 22:32:01 1997   | Tue Feb 11 12:32:01 1997
+ Wed Feb 12 17:32:01 1997   | 00:01:00 | Wed Feb 12 17:33:01 1997   | Wed Feb 12 17:31:01 1997
+ Wed Feb 12 17:32:01 1997   | 05:00:00 | Wed Feb 12 22:32:01 1997   | Wed Feb 12 12:32:01 1997
+ Thu Feb 13 17:32:01 1997   | 00:01:00 | Thu Feb 13 17:33:01 1997   | Thu Feb 13 17:31:01 1997
+ Thu Feb 13 17:32:01 1997   | 05:00:00 | Thu Feb 13 22:32:01 1997   | Thu Feb 13 12:32:01 1997
+ Fri Feb 14 17:32:01 1997   | 00:01:00 | Fri Feb 14 17:33:01 1997   | Fri Feb 14 17:31:01 1997
+ Fri Feb 14 17:32:01 1997   | 05:00:00 | Fri Feb 14 22:32:01 1997   | Fri Feb 14 12:32:01 1997
+ Sat Feb 15 17:32:01 1997   | 00:01:00 | Sat Feb 15 17:33:01 1997   | Sat Feb 15 17:31:01 1997
+ Sat Feb 15 17:32:01 1997   | 05:00:00 | Sat Feb 15 22:32:01 1997   | Sat Feb 15 12:32:01 1997
+ Sun Feb 16 17:32:01 1997   | 00:01:00 | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
+ Sun Feb 16 17:32:01 1997   | 00:01:00 | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
+ Sun Feb 16 17:32:01 1997   | 05:00:00 | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
+ Sun Feb 16 17:32:01 1997   | 05:00:00 | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
+ Fri Feb 28 17:32:01 1997   | 00:01:00 | Fri Feb 28 17:33:01 1997   | Fri Feb 28 17:31:01 1997
+ Fri Feb 28 17:32:01 1997   | 05:00:00 | Fri Feb 28 22:32:01 1997   | Fri Feb 28 12:32:01 1997
+ Sat Mar 01 17:32:01 1997   | 00:01:00 | Sat Mar 01 17:33:01 1997   | Sat Mar 01 17:31:01 1997
+ Sat Mar 01 17:32:01 1997   | 05:00:00 | Sat Mar 01 22:32:01 1997   | Sat Mar 01 12:32:01 1997
+ Tue Jun 10 17:32:01 1997   | 00:01:00 | Tue Jun 10 17:33:01 1997   | Tue Jun 10 17:31:01 1997
+ Tue Jun 10 17:32:01 1997   | 05:00:00 | Tue Jun 10 22:32:01 1997   | Tue Jun 10 12:32:01 1997
+ Tue Jun 10 18:32:01 1997   | 00:01:00 | Tue Jun 10 18:33:01 1997   | Tue Jun 10 18:31:01 1997
+ Tue Jun 10 18:32:01 1997   | 05:00:00 | Tue Jun 10 23:32:01 1997   | Tue Jun 10 13:32:01 1997
+ Thu Oct 02 17:32:01 1997   | 00:01:00 | Thu Oct 02 17:33:01 1997   | Thu Oct 02 17:31:01 1997
+ Thu Oct 02 17:32:01 1997   | 05:00:00 | Thu Oct 02 22:32:01 1997   | Thu Oct 02 12:32:01 1997
+ Tue Dec 30 17:32:01 1997   | 00:01:00 | Tue Dec 30 17:33:01 1997   | Tue Dec 30 17:31:01 1997
+ Tue Dec 30 17:32:01 1997   | 05:00:00 | Tue Dec 30 22:32:01 1997   | Tue Dec 30 12:32:01 1997
+ Wed Dec 31 17:32:01 1997   | 00:01:00 | Wed Dec 31 17:33:01 1997   | Wed Dec 31 17:31:01 1997
+ Wed Dec 31 17:32:01 1997   | 05:00:00 | Wed Dec 31 22:32:01 1997   | Wed Dec 31 12:32:01 1997
+ Fri Dec 31 17:32:01 1999   | 00:01:00 | Fri Dec 31 17:33:01 1999   | Fri Dec 31 17:31:01 1999
+ Fri Dec 31 17:32:01 1999   | 05:00:00 | Fri Dec 31 22:32:01 1999   | Fri Dec 31 12:32:01 1999
+ Sat Jan 01 17:32:01 2000   | 00:01:00 | Sat Jan 01 17:33:01 2000   | Sat Jan 01 17:31:01 2000
+ Sat Jan 01 17:32:01 2000   | 05:00:00 | Sat Jan 01 22:32:01 2000   | Sat Jan 01 12:32:01 2000
+ Wed Mar 15 02:14:05 2000   | 00:01:00 | Wed Mar 15 02:15:05 2000   | Wed Mar 15 02:13:05 2000
+ Wed Mar 15 02:14:05 2000   | 05:00:00 | Wed Mar 15 07:14:05 2000   | Tue Mar 14 21:14:05 2000
+ Wed Mar 15 03:14:04 2000   | 00:01:00 | Wed Mar 15 03:15:04 2000   | Wed Mar 15 03:13:04 2000
+ Wed Mar 15 03:14:04 2000   | 05:00:00 | Wed Mar 15 08:14:04 2000   | Tue Mar 14 22:14:04 2000
+ Wed Mar 15 08:14:01 2000   | 00:01:00 | Wed Mar 15 08:15:01 2000   | Wed Mar 15 08:13:01 2000
+ Wed Mar 15 08:14:01 2000   | 05:00:00 | Wed Mar 15 13:14:01 2000   | Wed Mar 15 03:14:01 2000
+ Wed Mar 15 12:14:03 2000   | 00:01:00 | Wed Mar 15 12:15:03 2000   | Wed Mar 15 12:13:03 2000
+ Wed Mar 15 12:14:03 2000   | 05:00:00 | Wed Mar 15 17:14:03 2000   | Wed Mar 15 07:14:03 2000
+ Wed Mar 15 13:14:02 2000   | 00:01:00 | Wed Mar 15 13:15:02 2000   | Wed Mar 15 13:13:02 2000
+ Wed Mar 15 13:14:02 2000   | 05:00:00 | Wed Mar 15 18:14:02 2000   | Wed Mar 15 08:14:02 2000
+ Sun Dec 31 17:32:01 2000   | 00:01:00 | Sun Dec 31 17:33:01 2000   | Sun Dec 31 17:31:01 2000
+ Sun Dec 31 17:32:01 2000   | 05:00:00 | Sun Dec 31 22:32:01 2000   | Sun Dec 31 12:32:01 2000
 (104 rows)
 
 SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS "add", t.f1 - i.f1 AS "subtract"
   FROM TIME_TBL t, INTERVAL_TBL i
   ORDER BY 1,2;
-      t      |               i               |     add     |  subtract   
--------------+-------------------------------+-------------+-------------
- 00:00:00    | @ 14 secs ago                 | 23:59:46    | 00:00:14
- 00:00:00    | @ 1 min                       | 00:01:00    | 23:59:00
- 00:00:00    | @ 5 hours                     | 05:00:00    | 19:00:00
- 00:00:00    | @ 1 day 2 hours 3 mins 4 secs | 02:03:04    | 21:56:56
- 00:00:00    | @ 10 days                     | 00:00:00    | 00:00:00
- 00:00:00    | @ 3 mons                      | 00:00:00    | 00:00:00
- 00:00:00    | @ 5 mons                      | 00:00:00    | 00:00:00
- 00:00:00    | @ 5 mons 12 hours             | 12:00:00    | 12:00:00
- 00:00:00    | @ 6 years                     | 00:00:00    | 00:00:00
- 00:00:00    | @ 34 years                    | 00:00:00    | 00:00:00
- 01:00:00    | @ 14 secs ago                 | 00:59:46    | 01:00:14
- 01:00:00    | @ 1 min                       | 01:01:00    | 00:59:00
- 01:00:00    | @ 5 hours                     | 06:00:00    | 20:00:00
- 01:00:00    | @ 1 day 2 hours 3 mins 4 secs | 03:03:04    | 22:56:56
- 01:00:00    | @ 10 days                     | 01:00:00    | 01:00:00
- 01:00:00    | @ 3 mons                      | 01:00:00    | 01:00:00
- 01:00:00    | @ 5 mons                      | 01:00:00    | 01:00:00
- 01:00:00    | @ 5 mons 12 hours             | 13:00:00    | 13:00:00
- 01:00:00    | @ 6 years                     | 01:00:00    | 01:00:00
- 01:00:00    | @ 34 years                    | 01:00:00    | 01:00:00
- 02:03:00    | @ 14 secs ago                 | 02:02:46    | 02:03:14
- 02:03:00    | @ 1 min                       | 02:04:00    | 02:02:00
- 02:03:00    | @ 5 hours                     | 07:03:00    | 21:03:00
- 02:03:00    | @ 1 day 2 hours 3 mins 4 secs | 04:06:04    | 23:59:56
- 02:03:00    | @ 10 days                     | 02:03:00    | 02:03:00
- 02:03:00    | @ 3 mons                      | 02:03:00    | 02:03:00
- 02:03:00    | @ 5 mons                      | 02:03:00    | 02:03:00
- 02:03:00    | @ 5 mons 12 hours             | 14:03:00    | 14:03:00
- 02:03:00    | @ 6 years                     | 02:03:00    | 02:03:00
- 02:03:00    | @ 34 years                    | 02:03:00    | 02:03:00
- 11:59:00    | @ 14 secs ago                 | 11:58:46    | 11:59:14
- 11:59:00    | @ 1 min                       | 12:00:00    | 11:58:00
- 11:59:00    | @ 5 hours                     | 16:59:00    | 06:59:00
- 11:59:00    | @ 1 day 2 hours 3 mins 4 secs | 14:02:04    | 09:55:56
- 11:59:00    | @ 10 days                     | 11:59:00    | 11:59:00
- 11:59:00    | @ 3 mons                      | 11:59:00    | 11:59:00
- 11:59:00    | @ 5 mons                      | 11:59:00    | 11:59:00
- 11:59:00    | @ 5 mons 12 hours             | 23:59:00    | 23:59:00
- 11:59:00    | @ 6 years                     | 11:59:00    | 11:59:00
- 11:59:00    | @ 34 years                    | 11:59:00    | 11:59:00
- 12:00:00    | @ 14 secs ago                 | 11:59:46    | 12:00:14
- 12:00:00    | @ 1 min                       | 12:01:00    | 11:59:00
- 12:00:00    | @ 5 hours                     | 17:00:00    | 07:00:00
- 12:00:00    | @ 1 day 2 hours 3 mins 4 secs | 14:03:04    | 09:56:56
- 12:00:00    | @ 10 days                     | 12:00:00    | 12:00:00
- 12:00:00    | @ 3 mons                      | 12:00:00    | 12:00:00
- 12:00:00    | @ 5 mons                      | 12:00:00    | 12:00:00
- 12:00:00    | @ 5 mons 12 hours             | 00:00:00    | 00:00:00
- 12:00:00    | @ 6 years                     | 12:00:00    | 12:00:00
- 12:00:00    | @ 34 years                    | 12:00:00    | 12:00:00
- 12:01:00    | @ 14 secs ago                 | 12:00:46    | 12:01:14
- 12:01:00    | @ 1 min                       | 12:02:00    | 12:00:00
- 12:01:00    | @ 5 hours                     | 17:01:00    | 07:01:00
- 12:01:00    | @ 1 day 2 hours 3 mins 4 secs | 14:04:04    | 09:57:56
- 12:01:00    | @ 10 days                     | 12:01:00    | 12:01:00
- 12:01:00    | @ 3 mons                      | 12:01:00    | 12:01:00
- 12:01:00    | @ 5 mons                      | 12:01:00    | 12:01:00
- 12:01:00    | @ 5 mons 12 hours             | 00:01:00    | 00:01:00
- 12:01:00    | @ 6 years                     | 12:01:00    | 12:01:00
- 12:01:00    | @ 34 years                    | 12:01:00    | 12:01:00
- 15:36:39    | @ 14 secs ago                 | 15:36:25    | 15:36:53
- 15:36:39    | @ 14 secs ago                 | 15:36:25    | 15:36:53
- 15:36:39    | @ 1 min                       | 15:37:39    | 15:35:39
- 15:36:39    | @ 1 min                       | 15:37:39    | 15:35:39
- 15:36:39    | @ 5 hours                     | 20:36:39    | 10:36:39
- 15:36:39    | @ 5 hours                     | 20:36:39    | 10:36:39
- 15:36:39    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43    | 13:33:35
- 15:36:39    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43    | 13:33:35
- 15:36:39    | @ 10 days                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 10 days                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 3 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 3 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons 12 hours             | 03:36:39    | 03:36:39
- 15:36:39    | @ 5 mons 12 hours             | 03:36:39    | 03:36:39
- 15:36:39    | @ 6 years                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 6 years                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 34 years                    | 15:36:39    | 15:36:39
- 15:36:39    | @ 34 years                    | 15:36:39    | 15:36:39
- 23:59:00    | @ 14 secs ago                 | 23:58:46    | 23:59:14
- 23:59:00    | @ 1 min                       | 00:00:00    | 23:58:00
- 23:59:00    | @ 5 hours                     | 04:59:00    | 18:59:00
- 23:59:00    | @ 1 day 2 hours 3 mins 4 secs | 02:02:04    | 21:55:56
- 23:59:00    | @ 10 days                     | 23:59:00    | 23:59:00
- 23:59:00    | @ 3 mons                      | 23:59:00    | 23:59:00
- 23:59:00    | @ 5 mons                      | 23:59:00    | 23:59:00
- 23:59:00    | @ 5 mons 12 hours             | 11:59:00    | 11:59:00
- 23:59:00    | @ 6 years                     | 23:59:00    | 23:59:00
- 23:59:00    | @ 34 years                    | 23:59:00    | 23:59:00
- 23:59:59.99 | @ 14 secs ago                 | 23:59:45.99 | 00:00:13.99
- 23:59:59.99 | @ 1 min                       | 00:00:59.99 | 23:58:59.99
- 23:59:59.99 | @ 5 hours                     | 04:59:59.99 | 18:59:59.99
- 23:59:59.99 | @ 1 day 2 hours 3 mins 4 secs | 02:03:03.99 | 21:56:55.99
- 23:59:59.99 | @ 10 days                     | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 3 mons                      | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 5 mons                      | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 5 mons 12 hours             | 11:59:59.99 | 11:59:59.99
- 23:59:59.99 | @ 6 years                     | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 34 years                    | 23:59:59.99 | 23:59:59.99
+      t      |        i        |     add     |  subtract   
+-------------+-----------------+-------------+-------------
+ 00:00:00    | -00:00:14       | 23:59:46    | 00:00:14
+ 00:00:00    | 00:01:00        | 00:01:00    | 23:59:00
+ 00:00:00    | 05:00:00        | 05:00:00    | 19:00:00
+ 00:00:00    | 1 day 02:03:04  | 02:03:04    | 21:56:56
+ 00:00:00    | 10 days         | 00:00:00    | 00:00:00
+ 00:00:00    | 3 mons          | 00:00:00    | 00:00:00
+ 00:00:00    | 5 mons          | 00:00:00    | 00:00:00
+ 00:00:00    | 5 mons 12:00:00 | 12:00:00    | 12:00:00
+ 00:00:00    | 6 years         | 00:00:00    | 00:00:00
+ 00:00:00    | 34 years        | 00:00:00    | 00:00:00
+ 01:00:00    | -00:00:14       | 00:59:46    | 01:00:14
+ 01:00:00    | 00:01:00        | 01:01:00    | 00:59:00
+ 01:00:00    | 05:00:00        | 06:00:00    | 20:00:00
+ 01:00:00    | 1 day 02:03:04  | 03:03:04    | 22:56:56
+ 01:00:00    | 10 days         | 01:00:00    | 01:00:00
+ 01:00:00    | 3 mons          | 01:00:00    | 01:00:00
+ 01:00:00    | 5 mons          | 01:00:00    | 01:00:00
+ 01:00:00    | 5 mons 12:00:00 | 13:00:00    | 13:00:00
+ 01:00:00    | 6 years         | 01:00:00    | 01:00:00
+ 01:00:00    | 34 years        | 01:00:00    | 01:00:00
+ 02:03:00    | -00:00:14       | 02:02:46    | 02:03:14
+ 02:03:00    | 00:01:00        | 02:04:00    | 02:02:00
+ 02:03:00    | 05:00:00        | 07:03:00    | 21:03:00
+ 02:03:00    | 1 day 02:03:04  | 04:06:04    | 23:59:56
+ 02:03:00    | 10 days         | 02:03:00    | 02:03:00
+ 02:03:00    | 3 mons          | 02:03:00    | 02:03:00
+ 02:03:00    | 5 mons          | 02:03:00    | 02:03:00
+ 02:03:00    | 5 mons 12:00:00 | 14:03:00    | 14:03:00
+ 02:03:00    | 6 years         | 02:03:00    | 02:03:00
+ 02:03:00    | 34 years        | 02:03:00    | 02:03:00
+ 11:59:00    | -00:00:14       | 11:58:46    | 11:59:14
+ 11:59:00    | 00:01:00        | 12:00:00    | 11:58:00
+ 11:59:00    | 05:00:00        | 16:59:00    | 06:59:00
+ 11:59:00    | 1 day 02:03:04  | 14:02:04    | 09:55:56
+ 11:59:00    | 10 days         | 11:59:00    | 11:59:00
+ 11:59:00    | 3 mons          | 11:59:00    | 11:59:00
+ 11:59:00    | 5 mons          | 11:59:00    | 11:59:00
+ 11:59:00    | 5 mons 12:00:00 | 23:59:00    | 23:59:00
+ 11:59:00    | 6 years         | 11:59:00    | 11:59:00
+ 11:59:00    | 34 years        | 11:59:00    | 11:59:00
+ 12:00:00    | -00:00:14       | 11:59:46    | 12:00:14
+ 12:00:00    | 00:01:00        | 12:01:00    | 11:59:00
+ 12:00:00    | 05:00:00        | 17:00:00    | 07:00:00
+ 12:00:00    | 1 day 02:03:04  | 14:03:04    | 09:56:56
+ 12:00:00    | 10 days         | 12:00:00    | 12:00:00
+ 12:00:00    | 3 mons          | 12:00:00    | 12:00:00
+ 12:00:00    | 5 mons          | 12:00:00    | 12:00:00
+ 12:00:00    | 5 mons 12:00:00 | 00:00:00    | 00:00:00
+ 12:00:00    | 6 years         | 12:00:00    | 12:00:00
+ 12:00:00    | 34 years        | 12:00:00    | 12:00:00
+ 12:01:00    | -00:00:14       | 12:00:46    | 12:01:14
+ 12:01:00    | 00:01:00        | 12:02:00    | 12:00:00
+ 12:01:00    | 05:00:00        | 17:01:00    | 07:01:00
+ 12:01:00    | 1 day 02:03:04  | 14:04:04    | 09:57:56
+ 12:01:00    | 10 days         | 12:01:00    | 12:01:00
+ 12:01:00    | 3 mons          | 12:01:00    | 12:01:00
+ 12:01:00    | 5 mons          | 12:01:00    | 12:01:00
+ 12:01:00    | 5 mons 12:00:00 | 00:01:00    | 00:01:00
+ 12:01:00    | 6 years         | 12:01:00    | 12:01:00
+ 12:01:00    | 34 years        | 12:01:00    | 12:01:00
+ 15:36:39    | -00:00:14       | 15:36:25    | 15:36:53
+ 15:36:39    | -00:00:14       | 15:36:25    | 15:36:53
+ 15:36:39    | 00:01:00        | 15:37:39    | 15:35:39
+ 15:36:39    | 00:01:00        | 15:37:39    | 15:35:39
+ 15:36:39    | 05:00:00        | 20:36:39    | 10:36:39
+ 15:36:39    | 05:00:00        | 20:36:39    | 10:36:39
+ 15:36:39    | 1 day 02:03:04  | 17:39:43    | 13:33:35
+ 15:36:39    | 1 day 02:03:04  | 17:39:43    | 13:33:35
+ 15:36:39    | 10 days         | 15:36:39    | 15:36:39
+ 15:36:39    | 10 days         | 15:36:39    | 15:36:39
+ 15:36:39    | 3 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 3 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons 12:00:00 | 03:36:39    | 03:36:39
+ 15:36:39    | 5 mons 12:00:00 | 03:36:39    | 03:36:39
+ 15:36:39    | 6 years         | 15:36:39    | 15:36:39
+ 15:36:39    | 6 years         | 15:36:39    | 15:36:39
+ 15:36:39    | 34 years        | 15:36:39    | 15:36:39
+ 15:36:39    | 34 years        | 15:36:39    | 15:36:39
+ 23:59:00    | -00:00:14       | 23:58:46    | 23:59:14
+ 23:59:00    | 00:01:00        | 00:00:00    | 23:58:00
+ 23:59:00    | 05:00:00        | 04:59:00    | 18:59:00
+ 23:59:00    | 1 day 02:03:04  | 02:02:04    | 21:55:56
+ 23:59:00    | 10 days         | 23:59:00    | 23:59:00
+ 23:59:00    | 3 mons          | 23:59:00    | 23:59:00
+ 23:59:00    | 5 mons          | 23:59:00    | 23:59:00
+ 23:59:00    | 5 mons 12:00:00 | 11:59:00    | 11:59:00
+ 23:59:00    | 6 years         | 23:59:00    | 23:59:00
+ 23:59:00    | 34 years        | 23:59:00    | 23:59:00
+ 23:59:59.99 | -00:00:14       | 23:59:45.99 | 00:00:13.99
+ 23:59:59.99 | 00:01:00        | 00:00:59.99 | 23:58:59.99
+ 23:59:59.99 | 05:00:00        | 04:59:59.99 | 18:59:59.99
+ 23:59:59.99 | 1 day 02:03:04  | 02:03:03.99 | 21:56:55.99
+ 23:59:59.99 | 10 days         | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 3 mons          | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 5 mons          | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 5 mons 12:00:00 | 11:59:59.99 | 11:59:59.99
+ 23:59:59.99 | 6 years         | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 34 years        | 23:59:59.99 | 23:59:59.99
 (100 rows)
 
 SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS "add", t.f1 - i.f1 AS "subtract"
   FROM TIMETZ_TBL t, INTERVAL_TBL i
   ORDER BY 1,2;
-       t        |               i               |      add       |    subtract    
-----------------+-------------------------------+----------------+----------------
- 00:01:00-07    | @ 14 secs ago                 | 00:00:46-07    | 00:01:14-07
- 00:01:00-07    | @ 1 min                       | 00:02:00-07    | 00:00:00-07
- 00:01:00-07    | @ 5 hours                     | 05:01:00-07    | 19:01:00-07
- 00:01:00-07    | @ 1 day 2 hours 3 mins 4 secs | 02:04:04-07    | 21:57:56-07
- 00:01:00-07    | @ 10 days                     | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 3 mons                      | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 5 mons                      | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 5 mons 12 hours             | 12:01:00-07    | 12:01:00-07
- 00:01:00-07    | @ 6 years                     | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 34 years                    | 00:01:00-07    | 00:01:00-07
- 01:00:00-07    | @ 14 secs ago                 | 00:59:46-07    | 01:00:14-07
- 01:00:00-07    | @ 1 min                       | 01:01:00-07    | 00:59:00-07
- 01:00:00-07    | @ 5 hours                     | 06:00:00-07    | 20:00:00-07
- 01:00:00-07    | @ 1 day 2 hours 3 mins 4 secs | 03:03:04-07    | 22:56:56-07
- 01:00:00-07    | @ 10 days                     | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 3 mons                      | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 5 mons                      | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 5 mons 12 hours             | 13:00:00-07    | 13:00:00-07
- 01:00:00-07    | @ 6 years                     | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 34 years                    | 01:00:00-07    | 01:00:00-07
- 02:03:00-07    | @ 14 secs ago                 | 02:02:46-07    | 02:03:14-07
- 02:03:00-07    | @ 1 min                       | 02:04:00-07    | 02:02:00-07
- 02:03:00-07    | @ 5 hours                     | 07:03:00-07    | 21:03:00-07
- 02:03:00-07    | @ 1 day 2 hours 3 mins 4 secs | 04:06:04-07    | 23:59:56-07
- 02:03:00-07    | @ 10 days                     | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 3 mons                      | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 5 mons                      | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 5 mons 12 hours             | 14:03:00-07    | 14:03:00-07
- 02:03:00-07    | @ 6 years                     | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 34 years                    | 02:03:00-07    | 02:03:00-07
- 08:08:00-04    | @ 14 secs ago                 | 08:07:46-04    | 08:08:14-04
- 08:08:00-04    | @ 1 min                       | 08:09:00-04    | 08:07:00-04
- 08:08:00-04    | @ 5 hours                     | 13:08:00-04    | 03:08:00-04
- 08:08:00-04    | @ 1 day 2 hours 3 mins 4 secs | 10:11:04-04    | 06:04:56-04
- 08:08:00-04    | @ 10 days                     | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 3 mons                      | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 5 mons                      | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 5 mons 12 hours             | 20:08:00-04    | 20:08:00-04
- 08:08:00-04    | @ 6 years                     | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 34 years                    | 08:08:00-04    | 08:08:00-04
- 07:07:00-08    | @ 14 secs ago                 | 07:06:46-08    | 07:07:14-08
- 07:07:00-08    | @ 1 min                       | 07:08:00-08    | 07:06:00-08
- 07:07:00-08    | @ 5 hours                     | 12:07:00-08    | 02:07:00-08
- 07:07:00-08    | @ 1 day 2 hours 3 mins 4 secs | 09:10:04-08    | 05:03:56-08
- 07:07:00-08    | @ 10 days                     | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 3 mons                      | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 5 mons                      | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 5 mons 12 hours             | 19:07:00-08    | 19:07:00-08
- 07:07:00-08    | @ 6 years                     | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 34 years                    | 07:07:00-08    | 07:07:00-08
- 11:59:00-07    | @ 14 secs ago                 | 11:58:46-07    | 11:59:14-07
- 11:59:00-07    | @ 1 min                       | 12:00:00-07    | 11:58:00-07
- 11:59:00-07    | @ 5 hours                     | 16:59:00-07    | 06:59:00-07
- 11:59:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:02:04-07    | 09:55:56-07
- 11:59:00-07    | @ 10 days                     | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 3 mons                      | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 5 mons                      | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 5 mons 12 hours             | 23:59:00-07    | 23:59:00-07
- 11:59:00-07    | @ 6 years                     | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 34 years                    | 11:59:00-07    | 11:59:00-07
- 12:00:00-07    | @ 14 secs ago                 | 11:59:46-07    | 12:00:14-07
- 12:00:00-07    | @ 1 min                       | 12:01:00-07    | 11:59:00-07
- 12:00:00-07    | @ 5 hours                     | 17:00:00-07    | 07:00:00-07
- 12:00:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:03:04-07    | 09:56:56-07
- 12:00:00-07    | @ 10 days                     | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 3 mons                      | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 5 mons                      | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 5 mons 12 hours             | 00:00:00-07    | 00:00:00-07
- 12:00:00-07    | @ 6 years                     | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 34 years                    | 12:00:00-07    | 12:00:00-07
- 12:01:00-07    | @ 14 secs ago                 | 12:00:46-07    | 12:01:14-07
- 12:01:00-07    | @ 1 min                       | 12:02:00-07    | 12:00:00-07
- 12:01:00-07    | @ 5 hours                     | 17:01:00-07    | 07:01:00-07
- 12:01:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:04:04-07    | 09:57:56-07
- 12:01:00-07    | @ 10 days                     | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 3 mons                      | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 5 mons                      | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 5 mons 12 hours             | 00:01:00-07    | 00:01:00-07
- 12:01:00-07    | @ 6 years                     | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 34 years                    | 12:01:00-07    | 12:01:00-07
- 15:36:39-04    | @ 14 secs ago                 | 15:36:25-04    | 15:36:53-04
- 15:36:39-04    | @ 1 min                       | 15:37:39-04    | 15:35:39-04
- 15:36:39-04    | @ 5 hours                     | 20:36:39-04    | 10:36:39-04
- 15:36:39-04    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43-04    | 13:33:35-04
- 15:36:39-04    | @ 10 days                     | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 3 mons                      | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 5 mons                      | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 5 mons 12 hours             | 03:36:39-04    | 03:36:39-04
- 15:36:39-04    | @ 6 years                     | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 34 years                    | 15:36:39-04    | 15:36:39-04
- 15:36:39-05    | @ 14 secs ago                 | 15:36:25-05    | 15:36:53-05
- 15:36:39-05    | @ 1 min                       | 15:37:39-05    | 15:35:39-05
- 15:36:39-05    | @ 5 hours                     | 20:36:39-05    | 10:36:39-05
- 15:36:39-05    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43-05    | 13:33:35-05
- 15:36:39-05    | @ 10 days                     | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 3 mons                      | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 5 mons                      | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 5 mons 12 hours             | 03:36:39-05    | 03:36:39-05
- 15:36:39-05    | @ 6 years                     | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 34 years                    | 15:36:39-05    | 15:36:39-05
- 23:59:00-07    | @ 14 secs ago                 | 23:58:46-07    | 23:59:14-07
- 23:59:00-07    | @ 1 min                       | 00:00:00-07    | 23:58:00-07
- 23:59:00-07    | @ 5 hours                     | 04:59:00-07    | 18:59:00-07
- 23:59:00-07    | @ 1 day 2 hours 3 mins 4 secs | 02:02:04-07    | 21:55:56-07
- 23:59:00-07    | @ 10 days                     | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 3 mons                      | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 5 mons                      | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 5 mons 12 hours             | 11:59:00-07    | 11:59:00-07
- 23:59:00-07    | @ 6 years                     | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 34 years                    | 23:59:00-07    | 23:59:00-07
- 23:59:59.99-07 | @ 14 secs ago                 | 23:59:45.99-07 | 00:00:13.99-07
- 23:59:59.99-07 | @ 1 min                       | 00:00:59.99-07 | 23:58:59.99-07
- 23:59:59.99-07 | @ 5 hours                     | 04:59:59.99-07 | 18:59:59.99-07
- 23:59:59.99-07 | @ 1 day 2 hours 3 mins 4 secs | 02:03:03.99-07 | 21:56:55.99-07
- 23:59:59.99-07 | @ 10 days                     | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 3 mons                      | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 5 mons                      | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 5 mons 12 hours             | 11:59:59.99-07 | 11:59:59.99-07
- 23:59:59.99-07 | @ 6 years                     | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 34 years                    | 23:59:59.99-07 | 23:59:59.99-07
+       t        |        i        |      add       |    subtract    
+----------------+-----------------+----------------+----------------
+ 00:01:00-07    | -00:00:14       | 00:00:46-07    | 00:01:14-07
+ 00:01:00-07    | 00:01:00        | 00:02:00-07    | 00:00:00-07
+ 00:01:00-07    | 05:00:00        | 05:01:00-07    | 19:01:00-07
+ 00:01:00-07    | 1 day 02:03:04  | 02:04:04-07    | 21:57:56-07
+ 00:01:00-07    | 10 days         | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 3 mons          | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 5 mons          | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 5 mons 12:00:00 | 12:01:00-07    | 12:01:00-07
+ 00:01:00-07    | 6 years         | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 34 years        | 00:01:00-07    | 00:01:00-07
+ 01:00:00-07    | -00:00:14       | 00:59:46-07    | 01:00:14-07
+ 01:00:00-07    | 00:01:00        | 01:01:00-07    | 00:59:00-07
+ 01:00:00-07    | 05:00:00        | 06:00:00-07    | 20:00:00-07
+ 01:00:00-07    | 1 day 02:03:04  | 03:03:04-07    | 22:56:56-07
+ 01:00:00-07    | 10 days         | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 3 mons          | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 5 mons          | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 5 mons 12:00:00 | 13:00:00-07    | 13:00:00-07
+ 01:00:00-07    | 6 years         | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 34 years        | 01:00:00-07    | 01:00:00-07
+ 02:03:00-07    | -00:00:14       | 02:02:46-07    | 02:03:14-07
+ 02:03:00-07    | 00:01:00        | 02:04:00-07    | 02:02:00-07
+ 02:03:00-07    | 05:00:00        | 07:03:00-07    | 21:03:00-07
+ 02:03:00-07    | 1 day 02:03:04  | 04:06:04-07    | 23:59:56-07
+ 02:03:00-07    | 10 days         | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 3 mons          | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 5 mons          | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 5 mons 12:00:00 | 14:03:00-07    | 14:03:00-07
+ 02:03:00-07    | 6 years         | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 34 years        | 02:03:00-07    | 02:03:00-07
+ 08:08:00-04    | -00:00:14       | 08:07:46-04    | 08:08:14-04
+ 08:08:00-04    | 00:01:00        | 08:09:00-04    | 08:07:00-04
+ 08:08:00-04    | 05:00:00        | 13:08:00-04    | 03:08:00-04
+ 08:08:00-04    | 1 day 02:03:04  | 10:11:04-04    | 06:04:56-04
+ 08:08:00-04    | 10 days         | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 3 mons          | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 5 mons          | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 5 mons 12:00:00 | 20:08:00-04    | 20:08:00-04
+ 08:08:00-04    | 6 years         | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 34 years        | 08:08:00-04    | 08:08:00-04
+ 07:07:00-08    | -00:00:14       | 07:06:46-08    | 07:07:14-08
+ 07:07:00-08    | 00:01:00        | 07:08:00-08    | 07:06:00-08
+ 07:07:00-08    | 05:00:00        | 12:07:00-08    | 02:07:00-08
+ 07:07:00-08    | 1 day 02:03:04  | 09:10:04-08    | 05:03:56-08
+ 07:07:00-08    | 10 days         | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 3 mons          | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 5 mons          | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 5 mons 12:00:00 | 19:07:00-08    | 19:07:00-08
+ 07:07:00-08    | 6 years         | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 34 years        | 07:07:00-08    | 07:07:00-08
+ 11:59:00-07    | -00:00:14       | 11:58:46-07    | 11:59:14-07
+ 11:59:00-07    | 00:01:00        | 12:00:00-07    | 11:58:00-07
+ 11:59:00-07    | 05:00:00        | 16:59:00-07    | 06:59:00-07
+ 11:59:00-07    | 1 day 02:03:04  | 14:02:04-07    | 09:55:56-07
+ 11:59:00-07    | 10 days         | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 3 mons          | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 5 mons          | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 5 mons 12:00:00 | 23:59:00-07    | 23:59:00-07
+ 11:59:00-07    | 6 years         | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 34 years        | 11:59:00-07    | 11:59:00-07
+ 12:00:00-07    | -00:00:14       | 11:59:46-07    | 12:00:14-07
+ 12:00:00-07    | 00:01:00        | 12:01:00-07    | 11:59:00-07
+ 12:00:00-07    | 05:00:00        | 17:00:00-07    | 07:00:00-07
+ 12:00:00-07    | 1 day 02:03:04  | 14:03:04-07    | 09:56:56-07
+ 12:00:00-07    | 10 days         | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 3 mons          | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 5 mons          | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 5 mons 12:00:00 | 00:00:00-07    | 00:00:00-07
+ 12:00:00-07    | 6 years         | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 34 years        | 12:00:00-07    | 12:00:00-07
+ 12:01:00-07    | -00:00:14       | 12:00:46-07    | 12:01:14-07
+ 12:01:00-07    | 00:01:00        | 12:02:00-07    | 12:00:00-07
+ 12:01:00-07    | 05:00:00        | 17:01:00-07    | 07:01:00-07
+ 12:01:00-07    | 1 day 02:03:04  | 14:04:04-07    | 09:57:56-07
+ 12:01:00-07    | 10 days         | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 3 mons          | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 5 mons          | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 5 mons 12:00:00 | 00:01:00-07    | 00:01:00-07
+ 12:01:00-07    | 6 years         | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 34 years        | 12:01:00-07    | 12:01:00-07
+ 15:36:39-04    | -00:00:14       | 15:36:25-04    | 15:36:53-04
+ 15:36:39-04    | 00:01:00        | 15:37:39-04    | 15:35:39-04
+ 15:36:39-04    | 05:00:00        | 20:36:39-04    | 10:36:39-04
+ 15:36:39-04    | 1 day 02:03:04  | 17:39:43-04    | 13:33:35-04
+ 15:36:39-04    | 10 days         | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 3 mons          | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 5 mons          | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 5 mons 12:00:00 | 03:36:39-04    | 03:36:39-04
+ 15:36:39-04    | 6 years         | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 34 years        | 15:36:39-04    | 15:36:39-04
+ 15:36:39-05    | -00:00:14       | 15:36:25-05    | 15:36:53-05
+ 15:36:39-05    | 00:01:00        | 15:37:39-05    | 15:35:39-05
+ 15:36:39-05    | 05:00:00        | 20:36:39-05    | 10:36:39-05
+ 15:36:39-05    | 1 day 02:03:04  | 17:39:43-05    | 13:33:35-05
+ 15:36:39-05    | 10 days         | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 3 mons          | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 5 mons          | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 5 mons 12:00:00 | 03:36:39-05    | 03:36:39-05
+ 15:36:39-05    | 6 years         | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 34 years        | 15:36:39-05    | 15:36:39-05
+ 23:59:00-07    | -00:00:14       | 23:58:46-07    | 23:59:14-07
+ 23:59:00-07    | 00:01:00        | 00:00:00-07    | 23:58:00-07
+ 23:59:00-07    | 05:00:00        | 04:59:00-07    | 18:59:00-07
+ 23:59:00-07    | 1 day 02:03:04  | 02:02:04-07    | 21:55:56-07
+ 23:59:00-07    | 10 days         | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 3 mons          | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 5 mons          | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 5 mons 12:00:00 | 11:59:00-07    | 11:59:00-07
+ 23:59:00-07    | 6 years         | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 34 years        | 23:59:00-07    | 23:59:00-07
+ 23:59:59.99-07 | -00:00:14       | 23:59:45.99-07 | 00:00:13.99-07
+ 23:59:59.99-07 | 00:01:00        | 00:00:59.99-07 | 23:58:59.99-07
+ 23:59:59.99-07 | 05:00:00        | 04:59:59.99-07 | 18:59:59.99-07
+ 23:59:59.99-07 | 1 day 02:03:04  | 02:03:03.99-07 | 21:56:55.99-07
+ 23:59:59.99-07 | 10 days         | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 3 mons          | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 5 mons          | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 5 mons 12:00:00 | 11:59:59.99-07 | 11:59:59.99-07
+ 23:59:59.99-07 | 6 years         | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 34 years        | 23:59:59.99-07 | 23:59:59.99-07
 (120 rows)
 
 -- SQL9x OVERLAPS operator
@@ -1405,357 +1403,357 @@
   ORDER BY "timestamp";
  16 |          timestamp           
 ----+------------------------------
-    | Thu Jan 01 00:00:00 1970 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Wed Mar 15 02:14:05 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 12:14:03 2000 PST
-    | Wed Mar 15 13:14:02 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
-    | Sat Sep 22 18:19:20 2001 PDT
+    | Thu Jan 01 00:00:00 1970 -05
+    | Wed Feb 28 17:32:01 1996 -05
+    | Thu Feb 29 17:32:01 1996 -05
+    | Fri Mar 01 17:32:01 1996 -05
+    | Mon Dec 30 17:32:01 1996 -05
+    | Tue Dec 31 17:32:01 1996 -05
+    | Fri Dec 31 17:32:01 1999 -05
+    | Sat Jan 01 17:32:01 2000 -05
+    | Wed Mar 15 02:14:05 2000 -05
+    | Wed Mar 15 03:14:04 2000 -05
+    | Wed Mar 15 08:14:01 2000 -05
+    | Wed Mar 15 12:14:03 2000 -05
+    | Wed Mar 15 13:14:02 2000 -05
+    | Sun Dec 31 17:32:01 2000 -05
+    | Mon Jan 01 17:32:01 2001 -05
+    | Sat Sep 22 18:19:20 2001 -05
 (16 rows)
 
 SELECT '' AS "160", d.f1 AS "timestamp", t.f1 AS "interval", d.f1 + t.f1 AS plus
   FROM TEMP_TIMESTAMP d, INTERVAL_TBL t
   ORDER BY plus, "timestamp", "interval";
- 160 |          timestamp           |           interval            |             plus             
------+------------------------------+-------------------------------+------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | @ 14 secs ago                 | Wed Dec 31 23:59:46 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 min                       | Thu Jan 01 00:01:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 hours                     | Thu Jan 01 05:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Jan 02 02:03:04 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 10 days                     | Sun Jan 11 00:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 3 mons                      | Wed Apr 01 00:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons                      | Mon Jun 01 00:00:00 1970 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours             | Mon Jun 01 12:00:00 1970 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 6 years                     | Thu Jan 01 00:00:00 1976 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago                 | Wed Feb 28 17:31:47 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 min                       | Wed Feb 28 17:33:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 hours                     | Wed Feb 28 22:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 14 secs ago                 | Thu Feb 29 17:31:47 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 min                       | Thu Feb 29 17:33:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Feb 29 19:35:05 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 hours                     | Thu Feb 29 22:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago                 | Fri Mar 01 17:31:47 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 min                       | Fri Mar 01 17:33:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Mar 01 19:35:05 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 hours                     | Fri Mar 01 22:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Mar 02 19:35:05 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 10 days                     | Sat Mar 09 17:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 10 days                     | Sun Mar 10 17:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 10 days                     | Mon Mar 11 17:32:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 3 mons                      | Tue May 28 17:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 3 mons                      | Wed May 29 17:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 3 mons                      | Sat Jun 01 17:32:01 1996 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons                      | Sun Jul 28 17:32:01 1996 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours             | Mon Jul 29 05:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons                      | Mon Jul 29 17:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours             | Tue Jul 30 05:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons                      | Thu Aug 01 17:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours             | Fri Aug 02 05:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago                 | Mon Dec 30 17:31:47 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 min                       | Mon Dec 30 17:33:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 hours                     | Mon Dec 30 22:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago                 | Tue Dec 31 17:31:47 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 min                       | Tue Dec 31 17:33:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 31 19:35:05 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 hours                     | Tue Dec 31 22:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Wed Jan 01 19:35:05 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 10 days                     | Thu Jan 09 17:32:01 1997 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 10 days                     | Fri Jan 10 17:32:01 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 3 mons                      | Sun Mar 30 17:32:01 1997 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 3 mons                      | Mon Mar 31 17:32:01 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons                      | Fri May 30 17:32:01 1997 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours             | Sat May 31 05:32:01 1997 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons                      | Sat May 31 17:32:01 1997 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours             | Sun Jun 01 05:32:01 1997 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago                 | Fri Dec 31 17:31:47 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 min                       | Fri Dec 31 17:33:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 hours                     | Fri Dec 31 22:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 14 secs ago                 | Sat Jan 01 17:31:47 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 min                       | Sat Jan 01 17:33:01 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Jan 01 19:35:05 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 hours                     | Sat Jan 01 22:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Jan 02 19:35:05 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 10 days                     | Mon Jan 10 17:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 10 days                     | Tue Jan 11 17:32:01 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 14 secs ago                 | Wed Mar 15 02:13:51 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 min                       | Wed Mar 15 02:15:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 14 secs ago                 | Wed Mar 15 03:13:50 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 min                       | Wed Mar 15 03:15:04 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 hours                     | Wed Mar 15 07:14:05 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago                 | Wed Mar 15 08:13:47 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 hours                     | Wed Mar 15 08:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 min                       | Wed Mar 15 08:15:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 14 secs ago                 | Wed Mar 15 12:13:49 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 min                       | Wed Mar 15 12:15:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 14 secs ago                 | Wed Mar 15 13:13:48 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 hours                     | Wed Mar 15 13:14:01 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 min                       | Wed Mar 15 13:15:02 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 hours                     | Wed Mar 15 17:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 hours                     | Wed Mar 15 18:14:02 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 04:17:09 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 05:17:08 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 10:17:05 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 14:17:07 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 15:17:06 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 10 days                     | Sat Mar 25 02:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 10 days                     | Sat Mar 25 03:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 10 days                     | Sat Mar 25 08:14:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 10 days                     | Sat Mar 25 12:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 10 days                     | Sat Mar 25 13:14:02 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 3 mons                      | Fri Mar 31 17:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 3 mons                      | Sat Apr 01 17:32:01 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons                      | Wed May 31 17:32:01 2000 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours             | Thu Jun 01 05:32:01 2000 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons                      | Thu Jun 01 17:32:01 2000 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours             | Fri Jun 02 05:32:01 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 3 mons                      | Thu Jun 15 02:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 3 mons                      | Thu Jun 15 03:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 3 mons                      | Thu Jun 15 08:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 3 mons                      | Thu Jun 15 12:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 3 mons                      | Thu Jun 15 13:14:02 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons                      | Tue Aug 15 02:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons                      | Tue Aug 15 03:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons                      | Tue Aug 15 08:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons                      | Tue Aug 15 12:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons                      | Tue Aug 15 13:14:02 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 14:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 15:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 20:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons 12 hours             | Wed Aug 16 00:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons 12 hours             | Wed Aug 16 01:14:02 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago                 | Sun Dec 31 17:31:47 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 min                       | Sun Dec 31 17:33:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 hours                     | Sun Dec 31 22:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 14 secs ago                 | Mon Jan 01 17:31:47 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 min                       | Mon Jan 01 17:33:01 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Mon Jan 01 19:35:05 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 hours                     | Mon Jan 01 22:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Jan 02 19:35:05 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 10 days                     | Wed Jan 10 17:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 10 days                     | Thu Jan 11 17:32:01 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 3 mons                      | Sat Mar 31 17:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 3 mons                      | Sun Apr 01 17:32:01 2001 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons                      | Thu May 31 17:32:01 2001 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours             | Fri Jun 01 05:32:01 2001 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons                      | Fri Jun 01 17:32:01 2001 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours             | Sat Jun 02 05:32:01 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 14 secs ago                 | Sat Sep 22 18:19:06 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 min                       | Sat Sep 22 18:20:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 hours                     | Sat Sep 22 23:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 day 2 hours 3 mins 4 secs | Sun Sep 23 20:22:24 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 10 days                     | Tue Oct 02 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 3 mons                      | Sat Dec 22 18:19:20 2001 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons                      | Fri Feb 22 18:19:20 2002 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons 12 hours             | Sat Feb 23 06:19:20 2002 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 6 years                     | Thu Feb 28 17:32:01 2002 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 6 years                     | Thu Feb 28 17:32:01 2002 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 6 years                     | Fri Mar 01 17:32:01 2002 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 6 years                     | Mon Dec 30 17:32:01 2002 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 6 years                     | Tue Dec 31 17:32:01 2002 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 34 years                    | Thu Jan 01 00:00:00 2004 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 6 years                     | Sat Dec 31 17:32:01 2005 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 6 years                     | Sun Jan 01 17:32:01 2006 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 6 years                     | Wed Mar 15 02:14:05 2006 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 6 years                     | Wed Mar 15 03:14:04 2006 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 6 years                     | Wed Mar 15 08:14:01 2006 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 6 years                     | Wed Mar 15 12:14:03 2006 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 6 years                     | Wed Mar 15 13:14:02 2006 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 6 years                     | Sun Dec 31 17:32:01 2006 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 6 years                     | Mon Jan 01 17:32:01 2007 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 6 years                     | Sat Sep 22 18:19:20 2007 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 34 years                    | Thu Feb 28 17:32:01 2030 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 34 years                    | Thu Feb 28 17:32:01 2030 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 34 years                    | Fri Mar 01 17:32:01 2030 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 34 years                    | Mon Dec 30 17:32:01 2030 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 34 years                    | Tue Dec 31 17:32:01 2030 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 34 years                    | Sat Dec 31 17:32:01 2033 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 34 years                    | Sun Jan 01 17:32:01 2034 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 34 years                    | Wed Mar 15 02:14:05 2034 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 34 years                    | Wed Mar 15 03:14:04 2034 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 34 years                    | Wed Mar 15 08:14:01 2034 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 34 years                    | Wed Mar 15 12:14:03 2034 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 34 years                    | Wed Mar 15 13:14:02 2034 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 34 years                    | Sun Dec 31 17:32:01 2034 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 34 years                    | Mon Jan 01 17:32:01 2035 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 34 years                    | Sat Sep 22 18:19:20 2035 PDT
+ 160 |          timestamp           |    interval     |             plus             
+-----+------------------------------+-----------------+------------------------------
+     | Thu Jan 01 00:00:00 1970 -05 | -00:00:14       | Wed Dec 31 23:59:46 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 00:01:00        | Thu Jan 01 00:01:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 05:00:00        | Thu Jan 01 05:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 1 day 02:03:04  | Fri Jan 02 02:03:04 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 10 days         | Sun Jan 11 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 3 mons          | Wed Apr 01 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons          | Mon Jun 01 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons 12:00:00 | Mon Jun 01 12:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 6 years         | Thu Jan 01 00:00:00 1976 -05
+     | Wed Feb 28 17:32:01 1996 -05 | -00:00:14       | Wed Feb 28 17:31:47 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 00:01:00        | Wed Feb 28 17:33:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 05:00:00        | Wed Feb 28 22:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | -00:00:14       | Thu Feb 29 17:31:47 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 00:01:00        | Thu Feb 29 17:33:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 1 day 02:03:04  | Thu Feb 29 19:35:05 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 05:00:00        | Thu Feb 29 22:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | -00:00:14       | Fri Mar 01 17:31:47 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 00:01:00        | Fri Mar 01 17:33:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 1 day 02:03:04  | Fri Mar 01 19:35:05 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 05:00:00        | Fri Mar 01 22:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 1 day 02:03:04  | Sat Mar 02 19:35:05 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 10 days         | Sat Mar 09 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 10 days         | Sun Mar 10 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 10 days         | Mon Mar 11 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 3 mons          | Tue May 28 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 3 mons          | Wed May 29 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 3 mons          | Sat Jun 01 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons          | Sun Jul 28 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons 12:00:00 | Mon Jul 29 05:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons          | Mon Jul 29 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons 12:00:00 | Tue Jul 30 05:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons          | Thu Aug 01 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons 12:00:00 | Fri Aug 02 05:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | -00:00:14       | Mon Dec 30 17:31:47 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 00:01:00        | Mon Dec 30 17:33:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 05:00:00        | Mon Dec 30 22:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | -00:00:14       | Tue Dec 31 17:31:47 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 00:01:00        | Tue Dec 31 17:33:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 1 day 02:03:04  | Tue Dec 31 19:35:05 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 05:00:00        | Tue Dec 31 22:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 1 day 02:03:04  | Wed Jan 01 19:35:05 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 10 days         | Thu Jan 09 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 10 days         | Fri Jan 10 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 3 mons          | Sun Mar 30 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 3 mons          | Mon Mar 31 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons          | Fri May 30 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons 12:00:00 | Sat May 31 05:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons          | Sat May 31 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons 12:00:00 | Sun Jun 01 05:32:01 1997 -05
+     | Fri Dec 31 17:32:01 1999 -05 | -00:00:14       | Fri Dec 31 17:31:47 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 00:01:00        | Fri Dec 31 17:33:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 05:00:00        | Fri Dec 31 22:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | -00:00:14       | Sat Jan 01 17:31:47 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 00:01:00        | Sat Jan 01 17:33:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 1 day 02:03:04  | Sat Jan 01 19:35:05 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 05:00:00        | Sat Jan 01 22:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 1 day 02:03:04  | Sun Jan 02 19:35:05 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 10 days         | Mon Jan 10 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 10 days         | Tue Jan 11 17:32:01 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | -00:00:14       | Wed Mar 15 02:13:51 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 00:01:00        | Wed Mar 15 02:15:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | -00:00:14       | Wed Mar 15 03:13:50 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 00:01:00        | Wed Mar 15 03:15:04 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 05:00:00        | Wed Mar 15 07:14:05 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | -00:00:14       | Wed Mar 15 08:13:47 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 05:00:00        | Wed Mar 15 08:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 00:01:00        | Wed Mar 15 08:15:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | -00:00:14       | Wed Mar 15 12:13:49 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 00:01:00        | Wed Mar 15 12:15:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | -00:00:14       | Wed Mar 15 13:13:48 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 05:00:00        | Wed Mar 15 13:14:01 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 00:01:00        | Wed Mar 15 13:15:02 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 05:00:00        | Wed Mar 15 17:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 05:00:00        | Wed Mar 15 18:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 1 day 02:03:04  | Thu Mar 16 04:17:09 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 1 day 02:03:04  | Thu Mar 16 05:17:08 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 1 day 02:03:04  | Thu Mar 16 10:17:05 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 1 day 02:03:04  | Thu Mar 16 14:17:07 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 1 day 02:03:04  | Thu Mar 16 15:17:06 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 10 days         | Sat Mar 25 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 10 days         | Sat Mar 25 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 10 days         | Sat Mar 25 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 10 days         | Sat Mar 25 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 10 days         | Sat Mar 25 13:14:02 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 3 mons          | Fri Mar 31 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 3 mons          | Sat Apr 01 17:32:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons          | Wed May 31 17:32:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons 12:00:00 | Thu Jun 01 05:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons          | Thu Jun 01 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons 12:00:00 | Fri Jun 02 05:32:01 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 3 mons          | Thu Jun 15 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 3 mons          | Thu Jun 15 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 3 mons          | Thu Jun 15 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 3 mons          | Thu Jun 15 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 3 mons          | Thu Jun 15 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons          | Tue Aug 15 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons          | Tue Aug 15 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons          | Tue Aug 15 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons          | Tue Aug 15 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons          | Tue Aug 15 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 14:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 15:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 20:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons 12:00:00 | Wed Aug 16 00:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons 12:00:00 | Wed Aug 16 01:14:02 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | -00:00:14       | Sun Dec 31 17:31:47 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 00:01:00        | Sun Dec 31 17:33:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 05:00:00        | Sun Dec 31 22:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | -00:00:14       | Mon Jan 01 17:31:47 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 00:01:00        | Mon Jan 01 17:33:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 1 day 02:03:04  | Mon Jan 01 19:35:05 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 05:00:00        | Mon Jan 01 22:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 1 day 02:03:04  | Tue Jan 02 19:35:05 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 10 days         | Wed Jan 10 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 10 days         | Thu Jan 11 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 3 mons          | Sat Mar 31 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 3 mons          | Sun Apr 01 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons          | Thu May 31 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons 12:00:00 | Fri Jun 01 05:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons          | Fri Jun 01 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons 12:00:00 | Sat Jun 02 05:32:01 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | -00:00:14       | Sat Sep 22 18:19:06 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 00:01:00        | Sat Sep 22 18:20:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 05:00:00        | Sat Sep 22 23:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 1 day 02:03:04  | Sun Sep 23 20:22:24 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 10 days         | Tue Oct 02 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 3 mons          | Sat Dec 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons          | Fri Feb 22 18:19:20 2002 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons 12:00:00 | Sat Feb 23 06:19:20 2002 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 6 years         | Thu Feb 28 17:32:01 2002 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 6 years         | Thu Feb 28 17:32:01 2002 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 6 years         | Fri Mar 01 17:32:01 2002 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 6 years         | Mon Dec 30 17:32:01 2002 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 6 years         | Tue Dec 31 17:32:01 2002 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 34 years        | Thu Jan 01 00:00:00 2004 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 6 years         | Sat Dec 31 17:32:01 2005 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 6 years         | Sun Jan 01 17:32:01 2006 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 6 years         | Wed Mar 15 02:14:05 2006 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 6 years         | Wed Mar 15 03:14:04 2006 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 6 years         | Wed Mar 15 08:14:01 2006 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 6 years         | Wed Mar 15 12:14:03 2006 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 6 years         | Wed Mar 15 13:14:02 2006 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 6 years         | Sun Dec 31 17:32:01 2006 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 6 years         | Mon Jan 01 17:32:01 2007 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 6 years         | Sat Sep 22 18:19:20 2007 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 34 years        | Thu Feb 28 17:32:01 2030 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 34 years        | Thu Feb 28 17:32:01 2030 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 34 years        | Fri Mar 01 17:32:01 2030 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 34 years        | Mon Dec 30 17:32:01 2030 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 34 years        | Tue Dec 31 17:32:01 2030 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 34 years        | Sat Dec 31 17:32:01 2033 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 34 years        | Sun Jan 01 17:32:01 2034 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 34 years        | Wed Mar 15 02:14:05 2034 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 34 years        | Wed Mar 15 03:14:04 2034 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 34 years        | Wed Mar 15 08:14:01 2034 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 34 years        | Wed Mar 15 12:14:03 2034 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 34 years        | Wed Mar 15 13:14:02 2034 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 34 years        | Sun Dec 31 17:32:01 2034 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 34 years        | Mon Jan 01 17:32:01 2035 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 34 years        | Sat Sep 22 18:19:20 2035 -05
 (160 rows)
 
 SELECT '' AS "160", d.f1 AS "timestamp", t.f1 AS "interval", d.f1 - t.f1 AS minus
   FROM TEMP_TIMESTAMP d, INTERVAL_TBL t
   WHERE isfinite(d.f1)
   ORDER BY minus, "timestamp", "interval";
- 160 |          timestamp           |           interval            |            minus             
------+------------------------------+-------------------------------+------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | @ 34 years                    | Wed Jan 01 00:00:00 1936 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 34 years                    | Wed Feb 28 17:32:01 1962 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 34 years                    | Wed Feb 28 17:32:01 1962 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 34 years                    | Thu Mar 01 17:32:01 1962 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 34 years                    | Sun Dec 30 17:32:01 1962 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 34 years                    | Mon Dec 31 17:32:01 1962 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 6 years                     | Wed Jan 01 00:00:00 1964 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 34 years                    | Fri Dec 31 17:32:01 1965 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 34 years                    | Sat Jan 01 17:32:01 1966 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 34 years                    | Tue Mar 15 02:14:05 1966 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 34 years                    | Tue Mar 15 03:14:04 1966 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 34 years                    | Tue Mar 15 08:14:01 1966 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 34 years                    | Tue Mar 15 12:14:03 1966 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 34 years                    | Tue Mar 15 13:14:02 1966 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 34 years                    | Sat Dec 31 17:32:01 1966 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 34 years                    | Sun Jan 01 17:32:01 1967 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 34 years                    | Fri Sep 22 18:19:20 1967 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours             | Thu Jul 31 12:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons                      | Fri Aug 01 00:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 3 mons                      | Wed Oct 01 00:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 10 days                     | Mon Dec 22 00:00:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 30 21:56:56 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 hours                     | Wed Dec 31 19:00:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 min                       | Wed Dec 31 23:59:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 14 secs ago                 | Thu Jan 01 00:00:14 1970 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 6 years                     | Wed Feb 28 17:32:01 1990 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 6 years                     | Wed Feb 28 17:32:01 1990 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 6 years                     | Thu Mar 01 17:32:01 1990 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 6 years                     | Sun Dec 30 17:32:01 1990 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 6 years                     | Mon Dec 31 17:32:01 1990 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 6 years                     | Fri Dec 31 17:32:01 1993 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 6 years                     | Sat Jan 01 17:32:01 1994 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 6 years                     | Tue Mar 15 02:14:05 1994 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 6 years                     | Tue Mar 15 03:14:04 1994 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 6 years                     | Tue Mar 15 08:14:01 1994 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 6 years                     | Tue Mar 15 12:14:03 1994 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 6 years                     | Tue Mar 15 13:14:02 1994 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 6 years                     | Sat Dec 31 17:32:01 1994 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 6 years                     | Sun Jan 01 17:32:01 1995 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 6 years                     | Fri Sep 22 18:19:20 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours             | Thu Sep 28 05:32:01 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons                      | Thu Sep 28 17:32:01 1995 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours             | Fri Sep 29 05:32:01 1995 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons                      | Fri Sep 29 17:32:01 1995 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours             | Sun Oct 01 05:32:01 1995 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons                      | Sun Oct 01 17:32:01 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 3 mons                      | Tue Nov 28 17:32:01 1995 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 3 mons                      | Wed Nov 29 17:32:01 1995 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 3 mons                      | Fri Dec 01 17:32:01 1995 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 10 days                     | Sun Feb 18 17:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 10 days                     | Mon Feb 19 17:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 10 days                     | Tue Feb 20 17:32:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Feb 27 15:28:57 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 hours                     | Wed Feb 28 12:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Wed Feb 28 15:28:57 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 min                       | Wed Feb 28 17:31:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago                 | Wed Feb 28 17:32:15 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 hours                     | Thu Feb 29 12:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Feb 29 15:28:57 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 min                       | Thu Feb 29 17:31:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 14 secs ago                 | Thu Feb 29 17:32:15 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 hours                     | Fri Mar 01 12:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 min                       | Fri Mar 01 17:31:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago                 | Fri Mar 01 17:32:15 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours             | Tue Jul 30 05:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons                      | Tue Jul 30 17:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours             | Wed Jul 31 05:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons                      | Wed Jul 31 17:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 3 mons                      | Mon Sep 30 17:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 3 mons                      | Mon Sep 30 17:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 10 days                     | Fri Dec 20 17:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 10 days                     | Sat Dec 21 17:32:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 29 15:28:57 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 hours                     | Mon Dec 30 12:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Mon Dec 30 15:28:57 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 min                       | Mon Dec 30 17:31:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago                 | Mon Dec 30 17:32:15 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 hours                     | Tue Dec 31 12:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 min                       | Tue Dec 31 17:31:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago                 | Tue Dec 31 17:32:15 1996 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours             | Sat Jul 31 05:32:01 1999 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons                      | Sat Jul 31 17:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours             | Sun Aug 01 05:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons                      | Sun Aug 01 17:32:01 1999 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 3 mons                      | Thu Sep 30 17:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 3 mons                      | Fri Oct 01 17:32:01 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 14:14:05 1999 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 15:14:04 1999 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 20:14:01 1999 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons 12 hours             | Fri Oct 15 00:14:03 1999 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons 12 hours             | Fri Oct 15 01:14:02 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons                      | Fri Oct 15 02:14:05 1999 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons                      | Fri Oct 15 03:14:04 1999 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons                      | Fri Oct 15 08:14:01 1999 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons                      | Fri Oct 15 12:14:03 1999 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons                      | Fri Oct 15 13:14:02 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 3 mons                      | Wed Dec 15 02:14:05 1999 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 3 mons                      | Wed Dec 15 03:14:04 1999 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 3 mons                      | Wed Dec 15 08:14:01 1999 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 3 mons                      | Wed Dec 15 12:14:03 1999 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 3 mons                      | Wed Dec 15 13:14:02 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 10 days                     | Tue Dec 21 17:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 10 days                     | Wed Dec 22 17:32:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Dec 30 15:28:57 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 hours                     | Fri Dec 31 12:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Dec 31 15:28:57 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 min                       | Fri Dec 31 17:31:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago                 | Fri Dec 31 17:32:15 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 hours                     | Sat Jan 01 12:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 min                       | Sat Jan 01 17:31:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 14 secs ago                 | Sat Jan 01 17:32:15 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 10 days                     | Sun Mar 05 02:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 10 days                     | Sun Mar 05 03:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 10 days                     | Sun Mar 05 08:14:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 10 days                     | Sun Mar 05 12:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 10 days                     | Sun Mar 05 13:14:02 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 00:11:01 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 01:11:00 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 06:10:57 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 10:10:59 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 11:10:58 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 hours                     | Tue Mar 14 21:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 hours                     | Tue Mar 14 22:14:04 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 min                       | Wed Mar 15 02:13:05 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 14 secs ago                 | Wed Mar 15 02:14:19 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 min                       | Wed Mar 15 03:13:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 hours                     | Wed Mar 15 03:14:01 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 14 secs ago                 | Wed Mar 15 03:14:18 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 hours                     | Wed Mar 15 07:14:03 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 min                       | Wed Mar 15 08:13:01 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 hours                     | Wed Mar 15 08:14:02 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago                 | Wed Mar 15 08:14:15 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 min                       | Wed Mar 15 12:13:03 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 14 secs ago                 | Wed Mar 15 12:14:17 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 min                       | Wed Mar 15 13:13:02 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 14 secs ago                 | Wed Mar 15 13:14:16 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours             | Mon Jul 31 05:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons                      | Mon Jul 31 17:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours             | Tue Aug 01 05:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons                      | Tue Aug 01 17:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 3 mons                      | Sat Sep 30 17:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 3 mons                      | Sun Oct 01 17:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 10 days                     | Thu Dec 21 17:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 10 days                     | Fri Dec 22 17:32:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Dec 30 15:28:57 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 hours                     | Sun Dec 31 12:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 31 15:28:57 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 min                       | Sun Dec 31 17:31:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago                 | Sun Dec 31 17:32:15 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 hours                     | Mon Jan 01 12:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 min                       | Mon Jan 01 17:31:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 14 secs ago                 | Mon Jan 01 17:32:15 2001 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons 12 hours             | Sun Apr 22 06:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons                      | Sun Apr 22 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 3 mons                      | Fri Jun 22 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 10 days                     | Wed Sep 12 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 day 2 hours 3 mins 4 secs | Fri Sep 21 16:16:16 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 hours                     | Sat Sep 22 13:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 min                       | Sat Sep 22 18:18:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 14 secs ago                 | Sat Sep 22 18:19:34 2001 PDT
+ 160 |          timestamp           |    interval     |            minus             
+-----+------------------------------+-----------------+------------------------------
+     | Thu Jan 01 00:00:00 1970 -05 | 34 years        | Wed Jan 01 00:00:00 1936 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 34 years        | Wed Feb 28 17:32:01 1962 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 34 years        | Wed Feb 28 17:32:01 1962 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 34 years        | Thu Mar 01 17:32:01 1962 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 34 years        | Sun Dec 30 17:32:01 1962 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 34 years        | Mon Dec 31 17:32:01 1962 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 6 years         | Wed Jan 01 00:00:00 1964 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 34 years        | Fri Dec 31 17:32:01 1965 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 34 years        | Sat Jan 01 17:32:01 1966 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 34 years        | Tue Mar 15 02:14:05 1966 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 34 years        | Tue Mar 15 03:14:04 1966 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 34 years        | Tue Mar 15 08:14:01 1966 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 34 years        | Tue Mar 15 12:14:03 1966 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 34 years        | Tue Mar 15 13:14:02 1966 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 34 years        | Sat Dec 31 17:32:01 1966 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 34 years        | Sun Jan 01 17:32:01 1967 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 34 years        | Fri Sep 22 18:19:20 1967 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons 12:00:00 | Thu Jul 31 12:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons          | Fri Aug 01 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 3 mons          | Wed Oct 01 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 10 days         | Mon Dec 22 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 1 day 02:03:04  | Tue Dec 30 21:56:56 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 05:00:00        | Wed Dec 31 19:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 00:01:00        | Wed Dec 31 23:59:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | -00:00:14       | Thu Jan 01 00:00:14 1970 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 6 years         | Wed Feb 28 17:32:01 1990 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 6 years         | Wed Feb 28 17:32:01 1990 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 6 years         | Thu Mar 01 17:32:01 1990 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 6 years         | Sun Dec 30 17:32:01 1990 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 6 years         | Mon Dec 31 17:32:01 1990 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 6 years         | Fri Dec 31 17:32:01 1993 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 6 years         | Sat Jan 01 17:32:01 1994 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 6 years         | Tue Mar 15 02:14:05 1994 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 6 years         | Tue Mar 15 03:14:04 1994 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 6 years         | Tue Mar 15 08:14:01 1994 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 6 years         | Tue Mar 15 12:14:03 1994 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 6 years         | Tue Mar 15 13:14:02 1994 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 6 years         | Sat Dec 31 17:32:01 1994 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 6 years         | Sun Jan 01 17:32:01 1995 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 6 years         | Fri Sep 22 18:19:20 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons 12:00:00 | Thu Sep 28 05:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons          | Thu Sep 28 17:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons 12:00:00 | Fri Sep 29 05:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons          | Fri Sep 29 17:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons 12:00:00 | Sun Oct 01 05:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons          | Sun Oct 01 17:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 3 mons          | Tue Nov 28 17:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 3 mons          | Wed Nov 29 17:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 3 mons          | Fri Dec 01 17:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 10 days         | Sun Feb 18 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 10 days         | Mon Feb 19 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 10 days         | Tue Feb 20 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 1 day 02:03:04  | Tue Feb 27 15:28:57 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 05:00:00        | Wed Feb 28 12:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 1 day 02:03:04  | Wed Feb 28 15:28:57 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 00:01:00        | Wed Feb 28 17:31:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | -00:00:14       | Wed Feb 28 17:32:15 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 05:00:00        | Thu Feb 29 12:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 1 day 02:03:04  | Thu Feb 29 15:28:57 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 00:01:00        | Thu Feb 29 17:31:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | -00:00:14       | Thu Feb 29 17:32:15 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 05:00:00        | Fri Mar 01 12:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 00:01:00        | Fri Mar 01 17:31:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | -00:00:14       | Fri Mar 01 17:32:15 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons 12:00:00 | Tue Jul 30 05:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons          | Tue Jul 30 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons 12:00:00 | Wed Jul 31 05:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons          | Wed Jul 31 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 3 mons          | Mon Sep 30 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 3 mons          | Mon Sep 30 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 10 days         | Fri Dec 20 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 10 days         | Sat Dec 21 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 1 day 02:03:04  | Sun Dec 29 15:28:57 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 05:00:00        | Mon Dec 30 12:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 1 day 02:03:04  | Mon Dec 30 15:28:57 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 00:01:00        | Mon Dec 30 17:31:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | -00:00:14       | Mon Dec 30 17:32:15 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 05:00:00        | Tue Dec 31 12:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 00:01:00        | Tue Dec 31 17:31:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | -00:00:14       | Tue Dec 31 17:32:15 1996 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons 12:00:00 | Sat Jul 31 05:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons          | Sat Jul 31 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons 12:00:00 | Sun Aug 01 05:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons          | Sun Aug 01 17:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 3 mons          | Thu Sep 30 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 3 mons          | Fri Oct 01 17:32:01 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 14:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 15:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 20:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons 12:00:00 | Fri Oct 15 00:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons 12:00:00 | Fri Oct 15 01:14:02 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons          | Fri Oct 15 02:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons          | Fri Oct 15 03:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons          | Fri Oct 15 08:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons          | Fri Oct 15 12:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons          | Fri Oct 15 13:14:02 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 3 mons          | Wed Dec 15 02:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 3 mons          | Wed Dec 15 03:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 3 mons          | Wed Dec 15 08:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 3 mons          | Wed Dec 15 12:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 3 mons          | Wed Dec 15 13:14:02 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 10 days         | Tue Dec 21 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 10 days         | Wed Dec 22 17:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 1 day 02:03:04  | Thu Dec 30 15:28:57 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 05:00:00        | Fri Dec 31 12:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 1 day 02:03:04  | Fri Dec 31 15:28:57 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 00:01:00        | Fri Dec 31 17:31:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | -00:00:14       | Fri Dec 31 17:32:15 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 05:00:00        | Sat Jan 01 12:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 00:01:00        | Sat Jan 01 17:31:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | -00:00:14       | Sat Jan 01 17:32:15 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 10 days         | Sun Mar 05 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 10 days         | Sun Mar 05 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 10 days         | Sun Mar 05 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 10 days         | Sun Mar 05 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 10 days         | Sun Mar 05 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 1 day 02:03:04  | Tue Mar 14 00:11:01 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 1 day 02:03:04  | Tue Mar 14 01:11:00 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 1 day 02:03:04  | Tue Mar 14 06:10:57 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 1 day 02:03:04  | Tue Mar 14 10:10:59 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 1 day 02:03:04  | Tue Mar 14 11:10:58 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 05:00:00        | Tue Mar 14 21:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 05:00:00        | Tue Mar 14 22:14:04 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 00:01:00        | Wed Mar 15 02:13:05 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | -00:00:14       | Wed Mar 15 02:14:19 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 00:01:00        | Wed Mar 15 03:13:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 05:00:00        | Wed Mar 15 03:14:01 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | -00:00:14       | Wed Mar 15 03:14:18 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 05:00:00        | Wed Mar 15 07:14:03 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 00:01:00        | Wed Mar 15 08:13:01 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 05:00:00        | Wed Mar 15 08:14:02 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | -00:00:14       | Wed Mar 15 08:14:15 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 00:01:00        | Wed Mar 15 12:13:03 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | -00:00:14       | Wed Mar 15 12:14:17 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 00:01:00        | Wed Mar 15 13:13:02 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | -00:00:14       | Wed Mar 15 13:14:16 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons 12:00:00 | Mon Jul 31 05:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons          | Mon Jul 31 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons 12:00:00 | Tue Aug 01 05:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons          | Tue Aug 01 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 3 mons          | Sat Sep 30 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 3 mons          | Sun Oct 01 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 10 days         | Thu Dec 21 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 10 days         | Fri Dec 22 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 1 day 02:03:04  | Sat Dec 30 15:28:57 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 05:00:00        | Sun Dec 31 12:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 1 day 02:03:04  | Sun Dec 31 15:28:57 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 00:01:00        | Sun Dec 31 17:31:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | -00:00:14       | Sun Dec 31 17:32:15 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 05:00:00        | Mon Jan 01 12:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 00:01:00        | Mon Jan 01 17:31:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | -00:00:14       | Mon Jan 01 17:32:15 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons 12:00:00 | Sun Apr 22 06:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons          | Sun Apr 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 3 mons          | Fri Jun 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 10 days         | Wed Sep 12 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 1 day 02:03:04  | Fri Sep 21 16:16:16 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 05:00:00        | Sat Sep 22 13:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 00:01:00        | Sat Sep 22 18:18:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | -00:00:14       | Sat Sep 22 18:19:34 2001 -05
 (160 rows)
 
 SELECT '' AS "16", d.f1 AS "timestamp",
@@ -1763,287 +1761,287 @@
    d.f1 - timestamp with time zone '1980-01-06 00:00 GMT' AS difference
   FROM TEMP_TIMESTAMP d
   ORDER BY difference;
- 16 |          timestamp           |         gpstime_zero         |             difference              
-----+------------------------------+------------------------------+-------------------------------------
-    | Thu Jan 01 00:00:00 1970 PST | Sat Jan 05 16:00:00 1980 PST | @ 3656 days 16 hours ago
-    | Wed Feb 28 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5898 days 1 hour 32 mins 1 sec
-    | Thu Feb 29 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5899 days 1 hour 32 mins 1 sec
-    | Fri Mar 01 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5900 days 1 hour 32 mins 1 sec
-    | Mon Dec 30 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 6204 days 1 hour 32 mins 1 sec
-    | Tue Dec 31 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 6205 days 1 hour 32 mins 1 sec
-    | Fri Dec 31 17:32:01 1999 PST | Sat Jan 05 16:00:00 1980 PST | @ 7300 days 1 hour 32 mins 1 sec
-    | Sat Jan 01 17:32:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7301 days 1 hour 32 mins 1 sec
-    | Wed Mar 15 02:14:05 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 10 hours 14 mins 5 secs
-    | Wed Mar 15 03:14:04 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 11 hours 14 mins 4 secs
-    | Wed Mar 15 08:14:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 16 hours 14 mins 1 sec
-    | Wed Mar 15 12:14:03 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 20 hours 14 mins 3 secs
-    | Wed Mar 15 13:14:02 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 21 hours 14 mins 2 secs
-    | Sun Dec 31 17:32:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7666 days 1 hour 32 mins 1 sec
-    | Mon Jan 01 17:32:01 2001 PST | Sat Jan 05 16:00:00 1980 PST | @ 7667 days 1 hour 32 mins 1 sec
-    | Sat Sep 22 18:19:20 2001 PDT | Sat Jan 05 16:00:00 1980 PST | @ 7931 days 1 hour 19 mins 20 secs
+ 16 |          timestamp           |         gpstime_zero         |      difference      
+----+------------------------------+------------------------------+----------------------
+    | Thu Jan 01 00:00:00 1970 -05 | Sat Jan 05 19:00:00 1980 -05 | -3656 days -19:00:00
+    | Wed Feb 28 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5897 days 22:32:01
+    | Thu Feb 29 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5898 days 22:32:01
+    | Fri Mar 01 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5899 days 22:32:01
+    | Mon Dec 30 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 6203 days 22:32:01
+    | Tue Dec 31 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 6204 days 22:32:01
+    | Fri Dec 31 17:32:01 1999 -05 | Sat Jan 05 19:00:00 1980 -05 | 7299 days 22:32:01
+    | Sat Jan 01 17:32:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7300 days 22:32:01
+    | Wed Mar 15 02:14:05 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 07:14:05
+    | Wed Mar 15 03:14:04 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 08:14:04
+    | Wed Mar 15 08:14:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 13:14:01
+    | Wed Mar 15 12:14:03 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 17:14:03
+    | Wed Mar 15 13:14:02 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 18:14:02
+    | Sun Dec 31 17:32:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7665 days 22:32:01
+    | Mon Jan 01 17:32:01 2001 -05 | Sat Jan 05 19:00:00 1980 -05 | 7666 days 22:32:01
+    | Sat Sep 22 18:19:20 2001 -05 | Sat Jan 05 19:00:00 1980 -05 | 7930 days 23:19:20
 (16 rows)
 
 SELECT '' AS "226", d1.f1 AS timestamp1, d2.f1 AS timestamp2, d1.f1 - d2.f1 AS difference
   FROM TEMP_TIMESTAMP d1, TEMP_TIMESTAMP d2
   ORDER BY timestamp1, timestamp2, difference;
- 226 |          timestamp1          |          timestamp2          |                difference                 
------+------------------------------+------------------------------+-------------------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | Thu Jan 01 00:00:00 1970 PST | @ 0
-     | Thu Jan 01 00:00:00 1970 PST | Wed Feb 28 17:32:01 1996 PST | @ 9554 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Thu Feb 29 17:32:01 1996 PST | @ 9555 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Fri Mar 01 17:32:01 1996 PST | @ 9556 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Mon Dec 30 17:32:01 1996 PST | @ 9860 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Tue Dec 31 17:32:01 1996 PST | @ 9861 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Fri Dec 31 17:32:01 1999 PST | @ 10956 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Sat Jan 01 17:32:01 2000 PST | @ 10957 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 02:14:05 2000 PST | @ 11031 days 2 hours 14 mins 5 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 03:14:04 2000 PST | @ 11031 days 3 hours 14 mins 4 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 08:14:01 2000 PST | @ 11031 days 8 hours 14 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 12:14:03 2000 PST | @ 11031 days 12 hours 14 mins 3 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 13:14:02 2000 PST | @ 11031 days 13 hours 14 mins 2 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Sun Dec 31 17:32:01 2000 PST | @ 11322 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Mon Jan 01 17:32:01 2001 PST | @ 11323 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Sat Sep 22 18:19:20 2001 PDT | @ 11587 days 17 hours 19 mins 20 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9554 days 17 hours 32 mins 1 sec
-     | Wed Feb 28 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 0
-     | Wed Feb 28 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 1 day ago
-     | Wed Feb 28 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 2 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 306 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 307 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1402 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1403 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1476 days 8 hours 42 mins 4 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1476 days 9 hours 42 mins 3 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1476 days 14 hours 42 mins ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1476 days 18 hours 42 mins 2 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1476 days 19 hours 42 mins 1 sec ago
-     | Wed Feb 28 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1768 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1769 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2032 days 23 hours 47 mins 19 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9555 days 17 hours 32 mins 1 sec
-     | Thu Feb 29 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 1 day
-     | Thu Feb 29 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 0
-     | Thu Feb 29 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 1 day ago
-     | Thu Feb 29 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 305 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 306 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1401 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1402 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1475 days 8 hours 42 mins 4 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1475 days 9 hours 42 mins 3 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1475 days 14 hours 42 mins ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1475 days 18 hours 42 mins 2 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1475 days 19 hours 42 mins 1 sec ago
-     | Thu Feb 29 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1767 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1768 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2031 days 23 hours 47 mins 19 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9556 days 17 hours 32 mins 1 sec
-     | Fri Mar 01 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 2 days
-     | Fri Mar 01 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 1 day
-     | Fri Mar 01 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 0
-     | Fri Mar 01 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 304 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 305 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1400 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1401 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1474 days 8 hours 42 mins 4 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1474 days 9 hours 42 mins 3 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1474 days 14 hours 42 mins ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1474 days 18 hours 42 mins 2 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1474 days 19 hours 42 mins 1 sec ago
-     | Fri Mar 01 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1766 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1767 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2030 days 23 hours 47 mins 19 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9860 days 17 hours 32 mins 1 sec
-     | Mon Dec 30 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 306 days
-     | Mon Dec 30 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 305 days
-     | Mon Dec 30 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 304 days
-     | Mon Dec 30 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 0
-     | Mon Dec 30 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 1 day ago
-     | Mon Dec 30 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1096 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1097 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1170 days 8 hours 42 mins 4 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1170 days 9 hours 42 mins 3 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1170 days 14 hours 42 mins ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1170 days 18 hours 42 mins 2 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1170 days 19 hours 42 mins 1 sec ago
-     | Mon Dec 30 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1462 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1463 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 1726 days 23 hours 47 mins 19 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9861 days 17 hours 32 mins 1 sec
-     | Tue Dec 31 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 307 days
-     | Tue Dec 31 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 306 days
-     | Tue Dec 31 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 305 days
-     | Tue Dec 31 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 1 day
-     | Tue Dec 31 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 0
-     | Tue Dec 31 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1095 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1096 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1169 days 8 hours 42 mins 4 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1169 days 9 hours 42 mins 3 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1169 days 14 hours 42 mins ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1169 days 18 hours 42 mins 2 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1169 days 19 hours 42 mins 1 sec ago
-     | Tue Dec 31 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1461 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1462 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 1725 days 23 hours 47 mins 19 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Thu Jan 01 00:00:00 1970 PST | @ 10956 days 17 hours 32 mins 1 sec
-     | Fri Dec 31 17:32:01 1999 PST | Wed Feb 28 17:32:01 1996 PST | @ 1402 days
-     | Fri Dec 31 17:32:01 1999 PST | Thu Feb 29 17:32:01 1996 PST | @ 1401 days
-     | Fri Dec 31 17:32:01 1999 PST | Fri Mar 01 17:32:01 1996 PST | @ 1400 days
-     | Fri Dec 31 17:32:01 1999 PST | Mon Dec 30 17:32:01 1996 PST | @ 1096 days
-     | Fri Dec 31 17:32:01 1999 PST | Tue Dec 31 17:32:01 1996 PST | @ 1095 days
-     | Fri Dec 31 17:32:01 1999 PST | Fri Dec 31 17:32:01 1999 PST | @ 0
-     | Fri Dec 31 17:32:01 1999 PST | Sat Jan 01 17:32:01 2000 PST | @ 1 day ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 02:14:05 2000 PST | @ 74 days 8 hours 42 mins 4 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 03:14:04 2000 PST | @ 74 days 9 hours 42 mins 3 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 08:14:01 2000 PST | @ 74 days 14 hours 42 mins ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 12:14:03 2000 PST | @ 74 days 18 hours 42 mins 2 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 13:14:02 2000 PST | @ 74 days 19 hours 42 mins 1 sec ago
-     | Fri Dec 31 17:32:01 1999 PST | Sun Dec 31 17:32:01 2000 PST | @ 366 days ago
-     | Fri Dec 31 17:32:01 1999 PST | Mon Jan 01 17:32:01 2001 PST | @ 367 days ago
-     | Fri Dec 31 17:32:01 1999 PST | Sat Sep 22 18:19:20 2001 PDT | @ 630 days 23 hours 47 mins 19 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 10957 days 17 hours 32 mins 1 sec
-     | Sat Jan 01 17:32:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1403 days
-     | Sat Jan 01 17:32:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1402 days
-     | Sat Jan 01 17:32:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1401 days
-     | Sat Jan 01 17:32:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1097 days
-     | Sat Jan 01 17:32:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1096 days
-     | Sat Jan 01 17:32:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 1 day
-     | Sat Jan 01 17:32:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 0
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 73 days 8 hours 42 mins 4 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 73 days 9 hours 42 mins 3 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 73 days 14 hours 42 mins ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 73 days 18 hours 42 mins 2 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 73 days 19 hours 42 mins 1 sec ago
-     | Sat Jan 01 17:32:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 365 days ago
-     | Sat Jan 01 17:32:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 366 days ago
-     | Sat Jan 01 17:32:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 629 days 23 hours 47 mins 19 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 2 hours 14 mins 5 secs
-     | Wed Mar 15 02:14:05 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 0
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 59 mins 59 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 5 hours 59 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 9 hours 59 mins 58 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 10 hours 59 mins 57 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 15 hours 17 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 15 hours 17 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 15 hours 5 mins 15 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 3 hours 14 mins 4 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 59 mins 59 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 0
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 4 hours 59 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 8 hours 59 mins 59 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 9 hours 59 mins 58 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 14 hours 17 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 14 hours 17 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 14 hours 5 mins 16 secs ago
-     | Wed Mar 15 08:14:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 8 hours 14 mins 1 sec
-     | Wed Mar 15 08:14:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 5 hours 59 mins 56 secs
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 4 hours 59 mins 57 secs
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 0
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 4 hours 2 secs ago
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 5 hours 1 sec ago
-     | Wed Mar 15 08:14:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 9 hours 18 mins ago
-     | Wed Mar 15 08:14:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 9 hours 18 mins ago
-     | Wed Mar 15 08:14:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 9 hours 5 mins 19 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 12 hours 14 mins 3 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 9 hours 59 mins 58 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 8 hours 59 mins 59 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 4 hours 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 0
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 59 mins 59 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 5 hours 17 mins 58 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 5 hours 17 mins 58 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 5 hours 5 mins 17 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 13 hours 14 mins 2 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 10 hours 59 mins 57 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 9 hours 59 mins 58 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 5 hours 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 59 mins 59 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 0
-     | Wed Mar 15 13:14:02 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 4 hours 17 mins 59 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 4 hours 17 mins 59 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 4 hours 5 mins 18 secs ago
-     | Sun Dec 31 17:32:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11322 days 17 hours 32 mins 1 sec
-     | Sun Dec 31 17:32:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1768 days
-     | Sun Dec 31 17:32:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1767 days
-     | Sun Dec 31 17:32:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1766 days
-     | Sun Dec 31 17:32:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1462 days
-     | Sun Dec 31 17:32:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1461 days
-     | Sun Dec 31 17:32:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 366 days
-     | Sun Dec 31 17:32:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 365 days
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 291 days 15 hours 17 mins 56 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 291 days 14 hours 17 mins 57 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 291 days 9 hours 18 mins
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 291 days 5 hours 17 mins 58 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 291 days 4 hours 17 mins 59 secs
-     | Sun Dec 31 17:32:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 0
-     | Sun Dec 31 17:32:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 1 day ago
-     | Sun Dec 31 17:32:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 264 days 23 hours 47 mins 19 secs ago
-     | Mon Jan 01 17:32:01 2001 PST | Thu Jan 01 00:00:00 1970 PST | @ 11323 days 17 hours 32 mins 1 sec
-     | Mon Jan 01 17:32:01 2001 PST | Wed Feb 28 17:32:01 1996 PST | @ 1769 days
-     | Mon Jan 01 17:32:01 2001 PST | Thu Feb 29 17:32:01 1996 PST | @ 1768 days
-     | Mon Jan 01 17:32:01 2001 PST | Fri Mar 01 17:32:01 1996 PST | @ 1767 days
-     | Mon Jan 01 17:32:01 2001 PST | Mon Dec 30 17:32:01 1996 PST | @ 1463 days
-     | Mon Jan 01 17:32:01 2001 PST | Tue Dec 31 17:32:01 1996 PST | @ 1462 days
-     | Mon Jan 01 17:32:01 2001 PST | Fri Dec 31 17:32:01 1999 PST | @ 367 days
-     | Mon Jan 01 17:32:01 2001 PST | Sat Jan 01 17:32:01 2000 PST | @ 366 days
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 02:14:05 2000 PST | @ 292 days 15 hours 17 mins 56 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 03:14:04 2000 PST | @ 292 days 14 hours 17 mins 57 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 08:14:01 2000 PST | @ 292 days 9 hours 18 mins
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 12:14:03 2000 PST | @ 292 days 5 hours 17 mins 58 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 13:14:02 2000 PST | @ 292 days 4 hours 17 mins 59 secs
-     | Mon Jan 01 17:32:01 2001 PST | Sun Dec 31 17:32:01 2000 PST | @ 1 day
-     | Mon Jan 01 17:32:01 2001 PST | Mon Jan 01 17:32:01 2001 PST | @ 0
-     | Mon Jan 01 17:32:01 2001 PST | Sat Sep 22 18:19:20 2001 PDT | @ 263 days 23 hours 47 mins 19 secs ago
-     | Sat Sep 22 18:19:20 2001 PDT | Thu Jan 01 00:00:00 1970 PST | @ 11587 days 17 hours 19 mins 20 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Feb 28 17:32:01 1996 PST | @ 2032 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Thu Feb 29 17:32:01 1996 PST | @ 2031 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Fri Mar 01 17:32:01 1996 PST | @ 2030 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Mon Dec 30 17:32:01 1996 PST | @ 1726 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Tue Dec 31 17:32:01 1996 PST | @ 1725 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Fri Dec 31 17:32:01 1999 PST | @ 630 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sat Jan 01 17:32:01 2000 PST | @ 629 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 02:14:05 2000 PST | @ 556 days 15 hours 5 mins 15 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 03:14:04 2000 PST | @ 556 days 14 hours 5 mins 16 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 08:14:01 2000 PST | @ 556 days 9 hours 5 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 12:14:03 2000 PST | @ 556 days 5 hours 5 mins 17 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 13:14:02 2000 PST | @ 556 days 4 hours 5 mins 18 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sun Dec 31 17:32:01 2000 PST | @ 264 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Mon Jan 01 17:32:01 2001 PST | @ 263 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sat Sep 22 18:19:20 2001 PDT | @ 0
+ 226 |          timestamp1          |          timestamp2          |      difference       
+-----+------------------------------+------------------------------+-----------------------
+     | Thu Jan 01 00:00:00 1970 -05 | Thu Jan 01 00:00:00 1970 -05 | 00:00:00
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Feb 28 17:32:01 1996 -05 | -9554 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Thu Feb 29 17:32:01 1996 -05 | -9555 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Fri Mar 01 17:32:01 1996 -05 | -9556 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Mon Dec 30 17:32:01 1996 -05 | -9860 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Tue Dec 31 17:32:01 1996 -05 | -9861 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Fri Dec 31 17:32:01 1999 -05 | -10956 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Sat Jan 01 17:32:01 2000 -05 | -10957 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 02:14:05 2000 -05 | -11031 days -02:14:05
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 03:14:04 2000 -05 | -11031 days -03:14:04
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 08:14:01 2000 -05 | -11031 days -08:14:01
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 12:14:03 2000 -05 | -11031 days -12:14:03
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 13:14:02 2000 -05 | -11031 days -13:14:02
+     | Thu Jan 01 00:00:00 1970 -05 | Sun Dec 31 17:32:01 2000 -05 | -11322 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Mon Jan 01 17:32:01 2001 -05 | -11323 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Sat Sep 22 18:19:20 2001 -05 | -11587 days -18:19:20
+     | Wed Feb 28 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9554 days 17:32:01
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 00:00:00
+     | Wed Feb 28 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | -1 days
+     | Wed Feb 28 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | -2 days
+     | Wed Feb 28 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -306 days
+     | Wed Feb 28 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -307 days
+     | Wed Feb 28 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1402 days
+     | Wed Feb 28 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1403 days
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1476 days -08:42:04
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1476 days -09:42:03
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1476 days -14:42:00
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1476 days -18:42:02
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1476 days -19:42:01
+     | Wed Feb 28 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1768 days
+     | Wed Feb 28 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1769 days
+     | Wed Feb 28 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2033 days -00:47:19
+     | Thu Feb 29 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9555 days 17:32:01
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 1 day
+     | Thu Feb 29 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 00:00:00
+     | Thu Feb 29 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | -1 days
+     | Thu Feb 29 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -305 days
+     | Thu Feb 29 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -306 days
+     | Thu Feb 29 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1401 days
+     | Thu Feb 29 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1402 days
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1475 days -08:42:04
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1475 days -09:42:03
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1475 days -14:42:00
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1475 days -18:42:02
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1475 days -19:42:01
+     | Thu Feb 29 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1767 days
+     | Thu Feb 29 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1768 days
+     | Thu Feb 29 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2032 days -00:47:19
+     | Fri Mar 01 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9556 days 17:32:01
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 2 days
+     | Fri Mar 01 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 1 day
+     | Fri Mar 01 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 00:00:00
+     | Fri Mar 01 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -304 days
+     | Fri Mar 01 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -305 days
+     | Fri Mar 01 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1400 days
+     | Fri Mar 01 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1401 days
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1474 days -08:42:04
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1474 days -09:42:03
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1474 days -14:42:00
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1474 days -18:42:02
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1474 days -19:42:01
+     | Fri Mar 01 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1766 days
+     | Fri Mar 01 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1767 days
+     | Fri Mar 01 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2031 days -00:47:19
+     | Mon Dec 30 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9860 days 17:32:01
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 306 days
+     | Mon Dec 30 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 305 days
+     | Mon Dec 30 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 304 days
+     | Mon Dec 30 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | 00:00:00
+     | Mon Dec 30 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -1 days
+     | Mon Dec 30 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1096 days
+     | Mon Dec 30 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1097 days
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1170 days -08:42:04
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1170 days -09:42:03
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1170 days -14:42:00
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1170 days -18:42:02
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1170 days -19:42:01
+     | Mon Dec 30 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1462 days
+     | Mon Dec 30 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1463 days
+     | Mon Dec 30 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -1727 days -00:47:19
+     | Tue Dec 31 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9861 days 17:32:01
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 307 days
+     | Tue Dec 31 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 306 days
+     | Tue Dec 31 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 305 days
+     | Tue Dec 31 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | 1 day
+     | Tue Dec 31 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | 00:00:00
+     | Tue Dec 31 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1095 days
+     | Tue Dec 31 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1096 days
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1169 days -08:42:04
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1169 days -09:42:03
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1169 days -14:42:00
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1169 days -18:42:02
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1169 days -19:42:01
+     | Tue Dec 31 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1461 days
+     | Tue Dec 31 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1462 days
+     | Tue Dec 31 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -1726 days -00:47:19
+     | Fri Dec 31 17:32:01 1999 -05 | Thu Jan 01 00:00:00 1970 -05 | 10956 days 17:32:01
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Feb 28 17:32:01 1996 -05 | 1402 days
+     | Fri Dec 31 17:32:01 1999 -05 | Thu Feb 29 17:32:01 1996 -05 | 1401 days
+     | Fri Dec 31 17:32:01 1999 -05 | Fri Mar 01 17:32:01 1996 -05 | 1400 days
+     | Fri Dec 31 17:32:01 1999 -05 | Mon Dec 30 17:32:01 1996 -05 | 1096 days
+     | Fri Dec 31 17:32:01 1999 -05 | Tue Dec 31 17:32:01 1996 -05 | 1095 days
+     | Fri Dec 31 17:32:01 1999 -05 | Fri Dec 31 17:32:01 1999 -05 | 00:00:00
+     | Fri Dec 31 17:32:01 1999 -05 | Sat Jan 01 17:32:01 2000 -05 | -1 days
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 02:14:05 2000 -05 | -74 days -08:42:04
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 03:14:04 2000 -05 | -74 days -09:42:03
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 08:14:01 2000 -05 | -74 days -14:42:00
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 12:14:03 2000 -05 | -74 days -18:42:02
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 13:14:02 2000 -05 | -74 days -19:42:01
+     | Fri Dec 31 17:32:01 1999 -05 | Sun Dec 31 17:32:01 2000 -05 | -366 days
+     | Fri Dec 31 17:32:01 1999 -05 | Mon Jan 01 17:32:01 2001 -05 | -367 days
+     | Fri Dec 31 17:32:01 1999 -05 | Sat Sep 22 18:19:20 2001 -05 | -631 days -00:47:19
+     | Sat Jan 01 17:32:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 10957 days 17:32:01
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1403 days
+     | Sat Jan 01 17:32:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1402 days
+     | Sat Jan 01 17:32:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1401 days
+     | Sat Jan 01 17:32:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1097 days
+     | Sat Jan 01 17:32:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1096 days
+     | Sat Jan 01 17:32:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 1 day
+     | Sat Jan 01 17:32:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 00:00:00
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | -73 days -08:42:04
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | -73 days -09:42:03
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -73 days -14:42:00
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -73 days -18:42:02
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -73 days -19:42:01
+     | Sat Jan 01 17:32:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -365 days
+     | Sat Jan 01 17:32:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -366 days
+     | Sat Jan 01 17:32:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -630 days -00:47:19
+     | Wed Mar 15 02:14:05 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 02:14:05
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 00:00:00
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | -00:59:59
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -05:59:56
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -09:59:58
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -10:59:57
+     | Wed Mar 15 02:14:05 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -15:17:56
+     | Wed Mar 15 02:14:05 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -15:17:56
+     | Wed Mar 15 02:14:05 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -16:05:15
+     | Wed Mar 15 03:14:04 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 03:14:04
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 00:59:59
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 00:00:00
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -04:59:57
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -08:59:59
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -09:59:58
+     | Wed Mar 15 03:14:04 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -14:17:57
+     | Wed Mar 15 03:14:04 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -14:17:57
+     | Wed Mar 15 03:14:04 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -15:05:16
+     | Wed Mar 15 08:14:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 08:14:01
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 05:59:56
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 04:59:57
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 00:00:00
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -04:00:02
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -05:00:01
+     | Wed Mar 15 08:14:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -09:18:00
+     | Wed Mar 15 08:14:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -09:18:00
+     | Wed Mar 15 08:14:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -10:05:19
+     | Wed Mar 15 12:14:03 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 12:14:03
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 09:59:58
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 08:59:59
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 04:00:02
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 00:00:00
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -00:59:59
+     | Wed Mar 15 12:14:03 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -05:17:58
+     | Wed Mar 15 12:14:03 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -05:17:58
+     | Wed Mar 15 12:14:03 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -06:05:17
+     | Wed Mar 15 13:14:02 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 13:14:02
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 10:59:57
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 09:59:58
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 05:00:01
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 00:59:59
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | 00:00:00
+     | Wed Mar 15 13:14:02 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -04:17:59
+     | Wed Mar 15 13:14:02 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -04:17:59
+     | Wed Mar 15 13:14:02 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -05:05:18
+     | Sun Dec 31 17:32:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11322 days 17:32:01
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1768 days
+     | Sun Dec 31 17:32:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1767 days
+     | Sun Dec 31 17:32:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1766 days
+     | Sun Dec 31 17:32:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1462 days
+     | Sun Dec 31 17:32:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1461 days
+     | Sun Dec 31 17:32:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 366 days
+     | Sun Dec 31 17:32:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 365 days
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 291 days 15:17:56
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 291 days 14:17:57
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 291 days 09:18:00
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 291 days 05:17:58
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | 291 days 04:17:59
+     | Sun Dec 31 17:32:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | 00:00:00
+     | Sun Dec 31 17:32:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -1 days
+     | Sun Dec 31 17:32:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -265 days -00:47:19
+     | Mon Jan 01 17:32:01 2001 -05 | Thu Jan 01 00:00:00 1970 -05 | 11323 days 17:32:01
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Feb 28 17:32:01 1996 -05 | 1769 days
+     | Mon Jan 01 17:32:01 2001 -05 | Thu Feb 29 17:32:01 1996 -05 | 1768 days
+     | Mon Jan 01 17:32:01 2001 -05 | Fri Mar 01 17:32:01 1996 -05 | 1767 days
+     | Mon Jan 01 17:32:01 2001 -05 | Mon Dec 30 17:32:01 1996 -05 | 1463 days
+     | Mon Jan 01 17:32:01 2001 -05 | Tue Dec 31 17:32:01 1996 -05 | 1462 days
+     | Mon Jan 01 17:32:01 2001 -05 | Fri Dec 31 17:32:01 1999 -05 | 367 days
+     | Mon Jan 01 17:32:01 2001 -05 | Sat Jan 01 17:32:01 2000 -05 | 366 days
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 02:14:05 2000 -05 | 292 days 15:17:56
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 03:14:04 2000 -05 | 292 days 14:17:57
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 08:14:01 2000 -05 | 292 days 09:18:00
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 12:14:03 2000 -05 | 292 days 05:17:58
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 13:14:02 2000 -05 | 292 days 04:17:59
+     | Mon Jan 01 17:32:01 2001 -05 | Sun Dec 31 17:32:01 2000 -05 | 1 day
+     | Mon Jan 01 17:32:01 2001 -05 | Mon Jan 01 17:32:01 2001 -05 | 00:00:00
+     | Mon Jan 01 17:32:01 2001 -05 | Sat Sep 22 18:19:20 2001 -05 | -264 days -00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Thu Jan 01 00:00:00 1970 -05 | 11587 days 18:19:20
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Feb 28 17:32:01 1996 -05 | 2033 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Thu Feb 29 17:32:01 1996 -05 | 2032 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Fri Mar 01 17:32:01 1996 -05 | 2031 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Mon Dec 30 17:32:01 1996 -05 | 1727 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Tue Dec 31 17:32:01 1996 -05 | 1726 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Fri Dec 31 17:32:01 1999 -05 | 631 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Sat Jan 01 17:32:01 2000 -05 | 630 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 02:14:05 2000 -05 | 556 days 16:05:15
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 03:14:04 2000 -05 | 556 days 15:05:16
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 08:14:01 2000 -05 | 556 days 10:05:19
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 12:14:03 2000 -05 | 556 days 06:05:17
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 13:14:02 2000 -05 | 556 days 05:05:18
+     | Sat Sep 22 18:19:20 2001 -05 | Sun Dec 31 17:32:01 2000 -05 | 265 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Mon Jan 01 17:32:01 2001 -05 | 264 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Sat Sep 22 18:19:20 2001 -05 | 00:00:00
 (256 rows)
 
 --
@@ -2055,22 +2053,22 @@
   ORDER BY date, "timestamp";
  16 |          timestamp           |    date    
 ----+------------------------------+------------
-    | Thu Jan 01 00:00:00 1970 PST | 01-01-1970
-    | Wed Feb 28 17:32:01 1996 PST | 02-28-1996
-    | Thu Feb 29 17:32:01 1996 PST | 02-29-1996
-    | Fri Mar 01 17:32:01 1996 PST | 03-01-1996
-    | Mon Dec 30 17:32:01 1996 PST | 12-30-1996
-    | Tue Dec 31 17:32:01 1996 PST | 12-31-1996
-    | Fri Dec 31 17:32:01 1999 PST | 12-31-1999
-    | Sat Jan 01 17:32:01 2000 PST | 01-01-2000
-    | Wed Mar 15 02:14:05 2000 PST | 03-15-2000
-    | Wed Mar 15 03:14:04 2000 PST | 03-15-2000
-    | Wed Mar 15 08:14:01 2000 PST | 03-15-2000
-    | Wed Mar 15 12:14:03 2000 PST | 03-15-2000
-    | Wed Mar 15 13:14:02 2000 PST | 03-15-2000
-    | Sun Dec 31 17:32:01 2000 PST | 12-31-2000
-    | Mon Jan 01 17:32:01 2001 PST | 01-01-2001
-    | Sat Sep 22 18:19:20 2001 PDT | 09-22-2001
+    | Thu Jan 01 00:00:00 1970 -05 | 01-01-1970
+    | Wed Feb 28 17:32:01 1996 -05 | 02-28-1996
+    | Thu Feb 29 17:32:01 1996 -05 | 02-29-1996
+    | Fri Mar 01 17:32:01 1996 -05 | 03-01-1996
+    | Mon Dec 30 17:32:01 1996 -05 | 12-30-1996
+    | Tue Dec 31 17:32:01 1996 -05 | 12-31-1996
+    | Fri Dec 31 17:32:01 1999 -05 | 12-31-1999
+    | Sat Jan 01 17:32:01 2000 -05 | 01-01-2000
+    | Wed Mar 15 02:14:05 2000 -05 | 03-15-2000
+    | Wed Mar 15 03:14:04 2000 -05 | 03-15-2000
+    | Wed Mar 15 08:14:01 2000 -05 | 03-15-2000
+    | Wed Mar 15 12:14:03 2000 -05 | 03-15-2000
+    | Wed Mar 15 13:14:02 2000 -05 | 03-15-2000
+    | Sun Dec 31 17:32:01 2000 -05 | 12-31-2000
+    | Mon Jan 01 17:32:01 2001 -05 | 01-01-2001
+    | Sat Sep 22 18:19:20 2001 -05 | 09-22-2001
 (16 rows)
 
 DROP TABLE TEMP_TIMESTAMP;
@@ -2115,7 +2113,7 @@
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
+    | Thu Oct 02 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
@@ -2186,7 +2184,7 @@
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
-    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
@@ -2263,7 +2261,7 @@
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
-    | 02/10/1997 17:32:01
+    | 10/02/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
@@ -2347,7 +2345,7 @@
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
-    | Mon 10 Feb 17:32:01 1997
+    | Thu 02 Oct 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
@@ -2425,7 +2423,7 @@
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
-    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
@@ -2503,7 +2501,7 @@
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
-    | 10/02/1997 17:32:01
+    | 02/10/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
@@ -2550,384 +2548,384 @@
 SELECT to_timestamp('0097/Feb/16 --> 08:14:30', 'YYYY/Mon/DD --> HH:MI:SS');
          to_timestamp         
 ------------------------------
- Sat Feb 16 08:14:30 0097 PST
+ 0097-02-16 08:14:30-05:19:20
 (1 row)
 
 SELECT to_timestamp('97/2/16 8:14:30', 'FMYYYY/FMMM/FMDD FMHH:FMMI:FMSS');
          to_timestamp         
 ------------------------------
- Sat Feb 16 08:14:30 0097 PST
+ 0097-02-16 08:14:30-05:19:20
 (1 row)
 
 SELECT to_timestamp('2011$03!18 23_38_15', 'YYYY-MM-DD HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Fri Mar 18 23:38:15 2011 PDT
+      to_timestamp      
+------------------------
+ 2011-03-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('1985 January 12', 'YYYY FMMonth DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1985 FMMonth 12', 'YYYY "FMMonth" DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1985 \ 12', 'YYYY \\ DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('My birthday-> Year: 1976, Month: May, Day: 16',
                     '"My birthday-> Year:" YYYY, "Month:" FMMonth, "Day:" DD');
-         to_timestamp         
-------------------------------
- Sun May 16 00:00:00 1976 PDT
+      to_timestamp      
+------------------------
+ 1976-05-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1,582nd VIII 21', 'Y,YYYth FMRM DD');
          to_timestamp         
 ------------------------------
- Sat Aug 21 00:00:00 1582 PST
+ 1582-08-21 00:00:00-05:19:20
 (1 row)
 
 SELECT to_timestamp('15 "text between quote marks" 98 54 45',
                     E'HH24 "\\"text between quote marks\\"" YY MI SS');
-         to_timestamp         
-------------------------------
- Thu Jan 01 15:54:45 1998 PST
+      to_timestamp      
+------------------------
+ 1998-01-01 15:54:45-05
 (1 row)
 
 SELECT to_timestamp('05121445482000', 'MMDDHH24MISSYYYY');
-         to_timestamp         
-------------------------------
- Fri May 12 14:45:48 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-05-12 14:45:48-05
 (1 row)
 
 SELECT to_timestamp('2000January09Sunday', 'YYYYFMMonthDDFMDay');
-         to_timestamp         
-------------------------------
- Sun Jan 09 00:00:00 2000 PST
+      to_timestamp      
+------------------------
+ 2000-01-09 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'YYMonDD');
 ERROR:  invalid value "/Fe" for "Mon"
 DETAIL:  The given value did not match any of the allowed values for this field.
 SELECT to_timestamp('97/Feb/16', 'YY:Mon:DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'FXYY:Mon:DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'FXYY/Mon/DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('19971116', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Sun Nov 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('20000-1116', 'YYYY-MMDD');
-         to_timestamp          
--------------------------------
- Thu Nov 16 00:00:00 20000 PST
+      to_timestamp       
+-------------------------
+ 20000-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1997 AD 11 16', 'YYYY BC MM DD');
-         to_timestamp         
-------------------------------
- Sun Nov 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1997 BC 11 16', 'YYYY BC MM DD');
           to_timestamp           
 ---------------------------------
- Tue Nov 16 00:00:00 1997 PST BC
+ 1997-11-16 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT to_timestamp('9-1116', 'Y-MMDD');
-         to_timestamp         
-------------------------------
- Mon Nov 16 00:00:00 2009 PST
+      to_timestamp      
+------------------------
+ 2009-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('95-1116', 'YY-MMDD');
-         to_timestamp         
-------------------------------
- Thu Nov 16 00:00:00 1995 PST
+      to_timestamp      
+------------------------
+ 1995-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('995-1116', 'YYY-MMDD');
-         to_timestamp         
-------------------------------
- Thu Nov 16 00:00:00 1995 PST
+      to_timestamp      
+------------------------
+ 1995-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005426', 'YYYYWWD');
-         to_timestamp         
-------------------------------
- Sat Oct 15 00:00:00 2005 PDT
+      to_timestamp      
+------------------------
+ 2005-10-15 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005300', 'YYYYDDD');
-         to_timestamp         
-------------------------------
- Thu Oct 27 00:00:00 2005 PDT
+      to_timestamp      
+------------------------
+ 2005-10-27 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005527', 'IYYYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('005527', 'IYYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('05527', 'IYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('5527', 'IIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005364', 'IYYYIDDD');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('20050302', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005 03 02', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp(' 2005 03 02', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('  20050302', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 AM', 'YYYY-MM-DD HH12:MI PM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 11:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 PM', 'YYYY-MM-DD HH12:MI PM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 +05',    'YYYY-MM-DD HH12:MI TZH');
-         to_timestamp         
-------------------------------
- Sat Dec 17 22:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 01:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 -05',    'YYYY-MM-DD HH12:MI TZH');
-         to_timestamp         
-------------------------------
- Sun Dec 18 08:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 +05:20', 'YYYY-MM-DD HH12:MI TZH:TZM');
-         to_timestamp         
-------------------------------
- Sat Dec 17 22:18:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 01:18:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 -05:20', 'YYYY-MM-DD HH12:MI TZH:TZM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 08:58:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:58:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 20',     'YYYY-MM-DD HH12:MI TZM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 03:18:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 06:18:00-05
 (1 row)
 
 --
 -- Check handling of multiple spaces in format and/or input
 --
 SELECT to_timestamp('2011-12-18 23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18   23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD   HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2000+   JUN', 'YYYY/MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('  2000 +JUN', 'YYYY/MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp(' 2000 +JUN', 'YYYY//MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000  +JUN', 'YYYY//MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 + JUN', 'YYYY MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 ++ JUN', 'YYYY  MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 + + JUN', 'YYYY  MON');
 ERROR:  invalid value "+ J" for "MON"
 DETAIL:  The given value did not match any of the allowed values for this field.
 SELECT to_timestamp('2000 + + JUN', 'YYYY   MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 -10', 'YYYY TZH');
-         to_timestamp         
-------------------------------
- Sat Jan 01 02:00:00 2000 PST
+      to_timestamp      
+------------------------
+ 2000-01-01 05:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 -10', 'YYYY  TZH');
-         to_timestamp         
-------------------------------
- Fri Dec 31 06:00:00 1999 PST
+      to_timestamp      
+------------------------
+ 1999-12-31 09:00:00-05
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM  DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM   DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011  12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011   12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12 18', 'YYYYxMMxDD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011x 12x 18', 'YYYYxMMxDD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 x12 x18', 'YYYYxMMxDD');
@@ -2970,9 +2968,9 @@
 SELECT to_timestamp('2016-06-13 15:50:60', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2016-06-13 15:50:60"
 SELECT to_timestamp('2016-06-13 15:50:55', 'YYYY-MM-DD HH24:MI:SS');  -- ok
-         to_timestamp         
-------------------------------
- Mon Jun 13 15:50:55 2016 PDT
+      to_timestamp      
+------------------------
+ 2016-06-13 15:50:55-05
 (1 row)
 
 SELECT to_timestamp('2016-06-13 15:50:55', 'YYYY-MM-DD HH:MI:SS');
@@ -2983,17 +2981,17 @@
 SELECT to_timestamp('2016-02-30 15:50:55', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2016-02-30 15:50:55"
 SELECT to_timestamp('2016-02-29 15:50:55', 'YYYY-MM-DD HH24:MI:SS');  -- ok
-         to_timestamp         
-------------------------------
- Mon Feb 29 15:50:55 2016 PST
+      to_timestamp      
+------------------------
+ 2016-02-29 15:50:55-05
 (1 row)
 
 SELECT to_timestamp('2015-02-29 15:50:55', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2015-02-29 15:50:55"
 SELECT to_timestamp('2015-02-11 86000', 'YYYY-MM-DD SSSS');  -- ok
-         to_timestamp         
-------------------------------
- Wed Feb 11 23:53:20 2015 PST
+      to_timestamp      
+------------------------
+ 2015-02-11 23:53:20-05
 (1 row)
 
 SELECT to_timestamp('2015-02-11 86400', 'YYYY-MM-DD SSSS');
@@ -3005,7 +3003,7 @@
 SELECT to_date('2016-02-29', 'YYYY-MM-DD');  -- ok
   to_date   
 ------------
- 02-29-2016
+ 2016-02-29
 (1 row)
 
 SELECT to_date('2015-02-29', 'YYYY-MM-DD');
@@ -3013,7 +3011,7 @@
 SELECT to_date('2015 365', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-31-2015
+ 2015-12-31
 (1 row)
 
 SELECT to_date('2015 366', 'YYYY DDD');
@@ -3021,13 +3019,13 @@
 SELECT to_date('2016 365', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-30-2016
+ 2016-12-30
 (1 row)
 
 SELECT to_date('2016 366', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-31-2016
+ 2016-12-31
 (1 row)
 
 SELECT to_date('2016 367', 'YYYY DDD');
@@ -3044,15 +3042,15 @@
 (1 row)
 
 SELECT '2012-12-12 12:00'::timestamptz;
-           timestamptz           
----------------------------------
- Wed Dec 12 12:00:00 2012 -01:30
+        timestamptz        
+---------------------------
+ 2012-12-12 12:00:00-01:30
 (1 row)
 
 SELECT '2012-12-12 12:00 America/New_York'::timestamptz;
-           timestamptz           
----------------------------------
- Wed Dec 12 15:30:00 2012 -01:30
+        timestamptz        
+---------------------------
+ 2012-12-12 15:30:00-01:30
 (1 row)
 
 SELECT to_char('2012-12-12 12:00'::timestamptz, 'YYYY-MM-DD HH:MI:SS TZ');
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/expressions.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/expressions.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/expressions.out	2019-08-12 14:55:05.422229943 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/expressions.out	2019-09-05 16:27:40.587714405 -0500
@@ -97,7 +97,7 @@
 -----------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: ((f1 >= '01-01-1997'::date) AND (f1 <= '01-01-1998'::date))
+         Filter: ((f1 >= '1997-01-01'::date) AND (f1 <= '1998-01-01'::date))
 (3 rows)
 
 select count(*) from date_tbl
@@ -114,7 +114,7 @@
 --------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: ((f1 < '01-01-1997'::date) OR (f1 > '01-01-1998'::date))
+         Filter: ((f1 < '1997-01-01'::date) OR (f1 > '1998-01-01'::date))
 (3 rows)
 
 select count(*) from date_tbl
@@ -131,7 +131,7 @@
 ----------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: (((f1 >= '01-01-1997'::date) AND (f1 <= '01-01-1998'::date)) OR ((f1 >= '01-01-1998'::date) AND (f1 <= '01-01-1997'::date)))
+         Filter: (((f1 >= '1997-01-01'::date) AND (f1 <= '1998-01-01'::date)) OR ((f1 >= '1998-01-01'::date) AND (f1 <= '1997-01-01'::date)))
 (3 rows)
 
 select count(*) from date_tbl
@@ -148,7 +148,7 @@
 -----------------------------------------------------------------------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: (((f1 < '01-01-1997'::date) OR (f1 > '01-01-1998'::date)) AND ((f1 < '01-01-1998'::date) OR (f1 > '01-01-1997'::date)))
+         Filter: (((f1 < '1997-01-01'::date) OR (f1 > '1998-01-01'::date)) AND ((f1 < '1998-01-01'::date) OR (f1 > '1997-01-01'::date)))
 (3 rows)
 
 select count(*) from date_tbl
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/arrays.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/arrays.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/arrays.out	2019-07-12 13:20:36.181293455 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/arrays.out	2019-09-05 16:27:54.204873380 -0500
@@ -1450,9 +1450,9 @@
 (1 row)
 
 select '{0 second  ,0 second}'::interval[];
-   interval    
----------------
- {"@ 0","@ 0"}
+      interval       
+---------------------
+ {00:00:00,00:00:00}
 (1 row)
 
 select '{ { "," } , { 3 } }'::text[];
@@ -1471,9 +1471,9 @@
            0 second,
            @ 1 hour @ 42 minutes @ 20 seconds
          }'::interval[];
-              interval              
-------------------------------------
- {"@ 0","@ 1 hour 42 mins 20 secs"}
+      interval       
+---------------------
+ {00:00:00,01:42:20}
 (1 row)
 
 select array[]::text[];
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/generated.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/generated.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/generated.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/generated.out	2019-09-05 16:27:57.745174679 -0500
@@ -556,13 +556,13 @@
 SELECT * FROM gtest_parent;
      f1     | f2 | f3 
 ------------+----+----
- 07-15-2016 |  1 |  2
+ 2016-07-15 |  1 |  2
 (1 row)
 
 SELECT * FROM gtest_child;
      f1     | f2 | f3 
 ------------+----+----
- 07-15-2016 |  1 |  2
+ 2016-07-15 |  1 |  2
 (1 row)
 
 DROP TABLE gtest_parent;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rules.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rules.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rules.out	2019-08-12 14:55:05.454232660 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rules.out	2019-09-05 16:28:10.722279028 -0500
@@ -1038,9 +1038,9 @@
                                     );
 UPDATE shoelace_data SET sl_avail = 6 WHERE  sl_name = 'sl7';
 SELECT * FROM shoelace_log;
-  sl_name   | sl_avail | log_who  |         log_when         
-------------+----------+----------+--------------------------
- sl7        |        6 | Al Bundy | Thu Jan 01 00:00:00 1970
+  sl_name   | sl_avail | log_who  |      log_when       
+------------+----------+----------+---------------------
+ sl7        |        6 | Al Bundy | 1970-01-01 00:00:00
 (1 row)
 
     CREATE RULE shoelace_ins AS ON INSERT TO shoelace
@@ -1108,12 +1108,12 @@
 (8 rows)
 
 SELECT * FROM shoelace_log ORDER BY sl_name;
-  sl_name   | sl_avail | log_who  |         log_when         
-------------+----------+----------+--------------------------
- sl3        |       10 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl6        |       20 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl7        |        6 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl8        |       21 | Al Bundy | Thu Jan 01 00:00:00 1970
+  sl_name   | sl_avail | log_who  |      log_when       
+------------+----------+----------+---------------------
+ sl3        |       10 | Al Bundy | 1970-01-01 00:00:00
+ sl6        |       20 | Al Bundy | 1970-01-01 00:00:00
+ sl7        |        6 | Al Bundy | 1970-01-01 00:00:00
+ sl8        |       21 | Al Bundy | 1970-01-01 00:00:00
 (4 rows)
 
     CREATE VIEW shoelace_obsolete AS
@@ -2562,7 +2562,7 @@
 shoelace_data|log_shoelace|CREATE RULE log_shoelace AS
     ON UPDATE TO public.shoelace_data
    WHERE (new.sl_avail <> old.sl_avail) DO  INSERT INTO shoelace_log (sl_name, sl_avail, log_who, log_when)
-  VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, 'Thu Jan 01 00:00:00 1970'::timestamp without time zone);
+  VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, '1970-01-01 00:00:00'::timestamp without time zone);
 shoelace_ok|shoelace_ok_ins|CREATE RULE shoelace_ok_ins AS
     ON INSERT TO public.shoelace_ok DO INSTEAD  UPDATE shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant)
   WHERE (shoelace.sl_name = new.ok_name);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/psql.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/psql.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/psql.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/psql.out	2019-09-05 16:28:09.502175203 -0500
@@ -252,7 +252,7 @@
 select '2000-01-01'::date as party_over
  party_over 
 ------------
- 01-01-2000
+ 2000-01-01
 (1 row)
 
 \unset FETCH_COUNT
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/select_views.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/select_views.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/select_views.out	2019-08-12 14:55:05.458232999 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/select_views.out	2019-09-05 16:28:16.334756614 -0500
@@ -1450,9 +1450,9 @@
 NOTICE:  f_leak => 1111-2222-3333-4444
  cid |     name      |       tel        |  passwd   |        cnum         | climit |    ymd     | usage 
 -----+---------------+------------------+-----------+---------------------+--------+------------+-------
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-05-2011 |    90
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-18-2011 |   110
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-21-2011 |   200
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-05 |    90
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-18 |   110
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-21 |   200
 (3 rows)
 
 EXPLAIN (COSTS OFF) SELECT * FROM my_credit_card_usage_normal
@@ -1462,7 +1462,7 @@
  Nested Loop
    Join Filter: (l.cid = r.cid)
    ->  Seq Scan on credit_usage r
-         Filter: ((ymd >= '10-01-2011'::date) AND (ymd < '11-01-2011'::date))
+         Filter: ((ymd >= '2011-10-01'::date) AND (ymd < '2011-11-01'::date))
    ->  Materialize
          ->  Subquery Scan on l
                Filter: f_leak(l.cnum)
@@ -1481,9 +1481,9 @@
 NOTICE:  f_leak => 1111-2222-3333-4444
  cid |     name      |       tel        |  passwd   |        cnum         | climit |    ymd     | usage 
 -----+---------------+------------------+-----------+---------------------+--------+------------+-------
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-05-2011 |    90
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-18-2011 |   110
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-21-2011 |   200
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-05 |    90
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-18 |   110
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-21 |   200
 (3 rows)
 
 EXPLAIN (COSTS OFF) SELECT * FROM my_credit_card_usage_secure
@@ -1495,7 +1495,7 @@
    ->  Nested Loop
          Join Filter: (l.cid = r.cid)
          ->  Seq Scan on credit_usage r
-               Filter: ((ymd >= '10-01-2011'::date) AND (ymd < '11-01-2011'::date))
+               Filter: ((ymd >= '2011-10-01'::date) AND (ymd < '2011-11-01'::date))
          ->  Materialize
                ->  Hash Join
                      Hash Cond: (r_1.cid = l.cid)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/guc.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/guc.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/guc.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/guc.out	2019-09-05 16:28:16.062733467 -0500
@@ -1,9 +1,9 @@
 -- pg_regress should ensure that this default value applies; however
 -- we can't rely on any specific default value of vacuum_cost_delay
 SHOW datestyle;
-   DateStyle   
----------------
- Postgres, MDY
+ DateStyle 
+-----------
+ ISO, DMY
 (1 row)
 
 -- SET to some nondefault value
@@ -24,7 +24,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL has no effect outside of a transaction
@@ -47,7 +47,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL within a transaction that commits
@@ -69,7 +69,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 COMMIT;
@@ -88,7 +88,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET should be reverted after ROLLBACK
@@ -110,7 +110,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 13.08.2006 12:34:56 PDT
+ 13.08.2006 12:34:56 -05
 (1 row)
 
 ROLLBACK;
@@ -129,7 +129,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- Some tests with subtransactions
@@ -145,7 +145,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT first_sp;
@@ -166,7 +166,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 13.08.2006 12:34:56 PDT
+ 13.08.2006 12:34:56 -05
 (1 row)
 
 ROLLBACK TO first_sp;
@@ -179,7 +179,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT second_sp;
@@ -194,7 +194,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 SAVEPOINT third_sp;
@@ -215,7 +215,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK TO third_sp;
@@ -234,7 +234,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 ROLLBACK TO second_sp;
@@ -253,7 +253,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 ROLLBACK;
@@ -272,7 +272,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL with Savepoints
@@ -292,7 +292,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT sp;
@@ -313,7 +313,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK TO sp;
@@ -332,7 +332,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 ROLLBACK;
@@ -351,7 +351,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL persists through RELEASE (which was not true in 8.0-8.2)
@@ -371,7 +371,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT sp;
@@ -392,7 +392,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 RELEASE SAVEPOINT sp;
@@ -411,7 +411,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK;
@@ -430,7 +430,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET followed by SET LOCAL
@@ -454,7 +454,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 COMMIT;
@@ -473,7 +473,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 --
@@ -490,20 +490,20 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 RESET datestyle;
 SHOW datestyle;
-   DateStyle   
----------------
- Postgres, MDY
+ DateStyle 
+-----------
+ ISO, DMY
 (1 row)
 
 SELECT '2006-08-13 12:34:56'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+      timestamptz       
+------------------------
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- Test some simple error cases
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/foreign_data.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/foreign_data.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/foreign_data.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/foreign_data.out	2019-09-05 16:28:18.230917960 -0500
@@ -728,7 +728,7 @@
  c3     | date    |           |          |         |                                | plain    |              | 
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (delimiter ',', quote '"', "be quoted" 'value')
 
@@ -849,7 +849,7 @@
  c10    | integer |           |          |         | (p1 'v1')                      | plain    |              | 
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (delimiter ',', quote '"', "be quoted" 'value')
 
@@ -897,7 +897,7 @@
  c10              | integer |           |          |         | (p1 'v1')
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (quote '~', "be quoted" 'value', escape '@')
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/window.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/window.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/window.out	2019-08-12 14:55:05.466233679 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/window.out	2019-09-05 16:28:16.630781803 -0500
@@ -1306,11 +1306,11 @@
 	SELECT i, min(i) over (order by i range between '1 day' preceding and '10 days' following) as min_i
   FROM generate_series(now(), now()+'100 days'::interval, '1 hour') i;
 SELECT pg_get_viewdef('v_window');
-                                                      pg_get_viewdef                                                       
----------------------------------------------------------------------------------------------------------------------------
-  SELECT i.i,                                                                                                             +
-     min(i.i) OVER (ORDER BY i.i RANGE BETWEEN '@ 1 day'::interval PRECEDING AND '@ 10 days'::interval FOLLOWING) AS min_i+
-    FROM generate_series(now(), (now() + '@ 100 days'::interval), '@ 1 hour'::interval) i(i);
+                                                    pg_get_viewdef                                                     
+-----------------------------------------------------------------------------------------------------------------------
+  SELECT i.i,                                                                                                         +
+     min(i.i) OVER (ORDER BY i.i RANGE BETWEEN '1 day'::interval PRECEDING AND '10 days'::interval FOLLOWING) AS min_i+
+    FROM generate_series(now(), (now() + '100 days'::interval), '01:00:00'::interval) i(i);
 (1 row)
 
 -- RANGE offset PRECEDING/FOLLOWING tests
@@ -1488,96 +1488,96 @@
 	salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 34900 |   5000 | 10-01-2006
- 34900 |   6000 | 10-01-2006
- 38400 |   3900 | 12-23-2006
- 47100 |   4800 | 08-01-2007
- 47100 |   5200 | 08-01-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   5200 | 08-15-2007
- 36100 |   3500 | 12-10-2007
- 32200 |   4500 | 01-01-2008
- 32200 |   4200 | 01-01-2008
+ 34900 |   5000 | 2006-10-01
+ 34900 |   6000 | 2006-10-01
+ 38400 |   3900 | 2006-12-23
+ 47100 |   4800 | 2007-08-01
+ 47100 |   5200 | 2007-08-01
+ 47100 |   4800 | 2007-08-08
+ 47100 |   5200 | 2007-08-15
+ 36100 |   3500 | 2007-12-10
+ 32200 |   4500 | 2008-01-01
+ 32200 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date desc range between '1 year'::interval preceding and '1 year'::interval following),
 	salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 32200 |   4200 | 01-01-2008
- 32200 |   4500 | 01-01-2008
- 36100 |   3500 | 12-10-2007
- 47100 |   5200 | 08-15-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   4800 | 08-01-2007
- 47100 |   5200 | 08-01-2007
- 38400 |   3900 | 12-23-2006
- 34900 |   5000 | 10-01-2006
- 34900 |   6000 | 10-01-2006
+ 32200 |   4200 | 2008-01-01
+ 32200 |   4500 | 2008-01-01
+ 36100 |   3500 | 2007-12-10
+ 47100 |   5200 | 2007-08-15
+ 47100 |   4800 | 2007-08-08
+ 47100 |   4800 | 2007-08-01
+ 47100 |   5200 | 2007-08-01
+ 38400 |   3900 | 2006-12-23
+ 34900 |   5000 | 2006-10-01
+ 34900 |   6000 | 2006-10-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date desc range between '1 year'::interval following and '1 year'::interval following),
 	salary, enroll_date from empsalary;
  sum | salary | enroll_date 
 -----+--------+-------------
-     |   4200 | 01-01-2008
-     |   4500 | 01-01-2008
-     |   3500 | 12-10-2007
-     |   5200 | 08-15-2007
-     |   4800 | 08-08-2007
-     |   4800 | 08-01-2007
-     |   5200 | 08-01-2007
-     |   3900 | 12-23-2006
-     |   5000 | 10-01-2006
-     |   6000 | 10-01-2006
+     |   4200 | 2008-01-01
+     |   4500 | 2008-01-01
+     |   3500 | 2007-12-10
+     |   5200 | 2007-08-15
+     |   4800 | 2007-08-08
+     |   4800 | 2007-08-01
+     |   5200 | 2007-08-01
+     |   3900 | 2006-12-23
+     |   5000 | 2006-10-01
+     |   6000 | 2006-10-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude current row), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 29900 |   5000 | 10-01-2006
- 28900 |   6000 | 10-01-2006
- 34500 |   3900 | 12-23-2006
- 42300 |   4800 | 08-01-2007
- 41900 |   5200 | 08-01-2007
- 42300 |   4800 | 08-08-2007
- 41900 |   5200 | 08-15-2007
- 32600 |   3500 | 12-10-2007
- 27700 |   4500 | 01-01-2008
- 28000 |   4200 | 01-01-2008
+ 29900 |   5000 | 2006-10-01
+ 28900 |   6000 | 2006-10-01
+ 34500 |   3900 | 2006-12-23
+ 42300 |   4800 | 2007-08-01
+ 41900 |   5200 | 2007-08-01
+ 42300 |   4800 | 2007-08-08
+ 41900 |   5200 | 2007-08-15
+ 32600 |   3500 | 2007-12-10
+ 27700 |   4500 | 2008-01-01
+ 28000 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude group), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 23900 |   5000 | 10-01-2006
- 23900 |   6000 | 10-01-2006
- 34500 |   3900 | 12-23-2006
- 37100 |   4800 | 08-01-2007
- 37100 |   5200 | 08-01-2007
- 42300 |   4800 | 08-08-2007
- 41900 |   5200 | 08-15-2007
- 32600 |   3500 | 12-10-2007
- 23500 |   4500 | 01-01-2008
- 23500 |   4200 | 01-01-2008
+ 23900 |   5000 | 2006-10-01
+ 23900 |   6000 | 2006-10-01
+ 34500 |   3900 | 2006-12-23
+ 37100 |   4800 | 2007-08-01
+ 37100 |   5200 | 2007-08-01
+ 42300 |   4800 | 2007-08-08
+ 41900 |   5200 | 2007-08-15
+ 32600 |   3500 | 2007-12-10
+ 23500 |   4500 | 2008-01-01
+ 23500 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude ties), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 28900 |   5000 | 10-01-2006
- 29900 |   6000 | 10-01-2006
- 38400 |   3900 | 12-23-2006
- 41900 |   4800 | 08-01-2007
- 42300 |   5200 | 08-01-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   5200 | 08-15-2007
- 36100 |   3500 | 12-10-2007
- 28000 |   4500 | 01-01-2008
- 27700 |   4200 | 01-01-2008
+ 28900 |   5000 | 2006-10-01
+ 29900 |   6000 | 2006-10-01
+ 38400 |   3900 | 2006-12-23
+ 41900 |   4800 | 2007-08-01
+ 42300 |   5200 | 2007-08-01
+ 47100 |   4800 | 2007-08-08
+ 47100 |   5200 | 2007-08-15
+ 36100 |   3500 | 2007-12-10
+ 28000 |   4500 | 2008-01-01
+ 27700 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by salary range between 1000 preceding and 1000 following),
@@ -1659,16 +1659,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        5000 |       5200 |   5000 | 10-01-2006
-        6000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4200 |   4500 | 01-01-2008
-        5000 |       4200 |   4200 | 01-01-2008
+        5000 |       5200 |   5000 | 2006-10-01
+        6000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4200 |   4500 | 2008-01-01
+        5000 |       4200 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1678,16 +1678,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        5000 |       5200 |   5000 | 10-01-2006
-        6000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4500 |   4500 | 01-01-2008
-        5000 |       4200 |   4200 | 01-01-2008
+        5000 |       5200 |   5000 | 2006-10-01
+        6000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4500 |   4500 | 2008-01-01
+        5000 |       4200 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1697,16 +1697,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        3900 |       5200 |   5000 | 10-01-2006
-        3900 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       3500 |   4500 | 01-01-2008
-        5000 |       3500 |   4200 | 01-01-2008
+        3900 |       5200 |   5000 | 2006-10-01
+        3900 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       3500 |   4500 | 2008-01-01
+        5000 |       3500 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1716,16 +1716,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        6000 |       5200 |   5000 | 10-01-2006
-        5000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4200 |   4500 | 01-01-2008
-        5000 |       4500 |   4200 | 01-01-2008
+        6000 |       5200 |   5000 | 2006-10-01
+        5000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4200 |   4500 | 2008-01-01
+        5000 |       4500 |   4200 | 2008-01-01
 (10 rows)
 
 -- RANGE offset PRECEDING/FOLLOWING with null values
@@ -2147,16 +2147,16 @@
              '1 year'::interval preceding and '1 year'::interval following);
  id | f_interval | first_value | last_value 
 ----+------------+-------------+------------
-  1 | @ 1 year   |           1 |          2
-  2 | @ 2 years  |           1 |          3
-  3 | @ 3 years  |           2 |          4
-  4 | @ 4 years  |           3 |          6
-  5 | @ 5 years  |           4 |          6
-  6 | @ 5 years  |           4 |          6
-  7 | @ 7 years  |           7 |          8
-  8 | @ 8 years  |           7 |          9
-  9 | @ 9 years  |           8 |         10
- 10 | @ 10 years |           9 |         10
+  1 | 1 year     |           1 |          2
+  2 | 2 years    |           1 |          3
+  3 | 3 years    |           2 |          4
+  4 | 4 years    |           3 |          6
+  5 | 5 years    |           4 |          6
+  6 | 5 years    |           4 |          6
+  7 | 7 years    |           7 |          8
+  8 | 8 years    |           7 |          9
+  9 | 9 years    |           8 |         10
+ 10 | 10 years   |           9 |         10
 (10 rows)
 
 select id, f_interval, first_value(id) over w, last_value(id) over w
@@ -2165,88 +2165,88 @@
              '1 year' preceding and '1 year' following);
  id | f_interval | first_value | last_value 
 ----+------------+-------------+------------
- 10 | @ 10 years |          10 |          9
-  9 | @ 9 years  |          10 |          8
-  8 | @ 8 years  |           9 |          7
-  7 | @ 7 years  |           8 |          7
-  6 | @ 5 years  |           6 |          4
-  5 | @ 5 years  |           6 |          4
-  4 | @ 4 years  |           6 |          3
-  3 | @ 3 years  |           4 |          2
-  2 | @ 2 years  |           3 |          1
-  1 | @ 1 year   |           2 |          1
+ 10 | 10 years   |          10 |          9
+  9 | 9 years    |          10 |          8
+  8 | 8 years    |           9 |          7
+  7 | 7 years    |           8 |          7
+  6 | 5 years    |           6 |          4
+  5 | 5 years    |           6 |          4
+  4 | 4 years    |           6 |          3
+  3 | 3 years    |           4 |          2
+  2 | 2 years    |           3 |          1
+  1 | 1 year     |           2 |          1
 (10 rows)
 
 select id, f_timestamptz, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamptz range between
              '1 year'::interval preceding and '1 year'::interval following);
- id |        f_timestamptz         | first_value | last_value 
-----+------------------------------+-------------+------------
-  1 | Thu Oct 19 02:23:54 2000 PDT |           1 |          3
-  2 | Fri Oct 19 02:23:54 2001 PDT |           1 |          4
-  3 | Fri Oct 19 02:23:54 2001 PDT |           1 |          4
-  4 | Sat Oct 19 02:23:54 2002 PDT |           2 |          5
-  5 | Sun Oct 19 02:23:54 2003 PDT |           4 |          6
-  6 | Tue Oct 19 02:23:54 2004 PDT |           5 |          7
-  7 | Wed Oct 19 02:23:54 2005 PDT |           6 |          8
-  8 | Thu Oct 19 02:23:54 2006 PDT |           7 |          9
-  9 | Fri Oct 19 02:23:54 2007 PDT |           8 |         10
- 10 | Sun Oct 19 02:23:54 2008 PDT |           9 |         10
+ id |     f_timestamptz      | first_value | last_value 
+----+------------------------+-------------+------------
+  1 | 2000-10-19 04:23:54-05 |           1 |          3
+  2 | 2001-10-19 04:23:54-05 |           1 |          4
+  3 | 2001-10-19 04:23:54-05 |           1 |          4
+  4 | 2002-10-19 04:23:54-05 |           2 |          5
+  5 | 2003-10-19 04:23:54-05 |           4 |          6
+  6 | 2004-10-19 04:23:54-05 |           5 |          7
+  7 | 2005-10-19 04:23:54-05 |           6 |          8
+  8 | 2006-10-19 04:23:54-05 |           7 |          9
+  9 | 2007-10-19 04:23:54-05 |           8 |         10
+ 10 | 2008-10-19 04:23:54-05 |           9 |         10
 (10 rows)
 
 select id, f_timestamptz, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamptz desc range between
              '1 year' preceding and '1 year' following);
- id |        f_timestamptz         | first_value | last_value 
-----+------------------------------+-------------+------------
- 10 | Sun Oct 19 02:23:54 2008 PDT |          10 |          9
-  9 | Fri Oct 19 02:23:54 2007 PDT |          10 |          8
-  8 | Thu Oct 19 02:23:54 2006 PDT |           9 |          7
-  7 | Wed Oct 19 02:23:54 2005 PDT |           8 |          6
-  6 | Tue Oct 19 02:23:54 2004 PDT |           7 |          5
-  5 | Sun Oct 19 02:23:54 2003 PDT |           6 |          4
-  4 | Sat Oct 19 02:23:54 2002 PDT |           5 |          2
-  3 | Fri Oct 19 02:23:54 2001 PDT |           4 |          1
-  2 | Fri Oct 19 02:23:54 2001 PDT |           4 |          1
-  1 | Thu Oct 19 02:23:54 2000 PDT |           3 |          1
+ id |     f_timestamptz      | first_value | last_value 
+----+------------------------+-------------+------------
+ 10 | 2008-10-19 04:23:54-05 |          10 |          9
+  9 | 2007-10-19 04:23:54-05 |          10 |          8
+  8 | 2006-10-19 04:23:54-05 |           9 |          7
+  7 | 2005-10-19 04:23:54-05 |           8 |          6
+  6 | 2004-10-19 04:23:54-05 |           7 |          5
+  5 | 2003-10-19 04:23:54-05 |           6 |          4
+  4 | 2002-10-19 04:23:54-05 |           5 |          2
+  3 | 2001-10-19 04:23:54-05 |           4 |          1
+  2 | 2001-10-19 04:23:54-05 |           4 |          1
+  1 | 2000-10-19 04:23:54-05 |           3 |          1
 (10 rows)
 
 select id, f_timestamp, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamp range between
              '1 year'::interval preceding and '1 year'::interval following);
- id |       f_timestamp        | first_value | last_value 
-----+--------------------------+-------------+------------
-  1 | Thu Oct 19 10:23:54 2000 |           1 |          3
-  2 | Fri Oct 19 10:23:54 2001 |           1 |          4
-  3 | Fri Oct 19 10:23:54 2001 |           1 |          4
-  4 | Sat Oct 19 10:23:54 2002 |           2 |          5
-  5 | Sun Oct 19 10:23:54 2003 |           4 |          6
-  6 | Tue Oct 19 10:23:54 2004 |           5 |          7
-  7 | Wed Oct 19 10:23:54 2005 |           6 |          8
-  8 | Thu Oct 19 10:23:54 2006 |           7 |          9
-  9 | Fri Oct 19 10:23:54 2007 |           8 |         10
- 10 | Sun Oct 19 10:23:54 2008 |           9 |         10
+ id |     f_timestamp     | first_value | last_value 
+----+---------------------+-------------+------------
+  1 | 2000-10-19 10:23:54 |           1 |          3
+  2 | 2001-10-19 10:23:54 |           1 |          4
+  3 | 2001-10-19 10:23:54 |           1 |          4
+  4 | 2002-10-19 10:23:54 |           2 |          5
+  5 | 2003-10-19 10:23:54 |           4 |          6
+  6 | 2004-10-19 10:23:54 |           5 |          7
+  7 | 2005-10-19 10:23:54 |           6 |          8
+  8 | 2006-10-19 10:23:54 |           7 |          9
+  9 | 2007-10-19 10:23:54 |           8 |         10
+ 10 | 2008-10-19 10:23:54 |           9 |         10
 (10 rows)
 
 select id, f_timestamp, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamp desc range between
              '1 year' preceding and '1 year' following);
- id |       f_timestamp        | first_value | last_value 
-----+--------------------------+-------------+------------
- 10 | Sun Oct 19 10:23:54 2008 |          10 |          9
-  9 | Fri Oct 19 10:23:54 2007 |          10 |          8
-  8 | Thu Oct 19 10:23:54 2006 |           9 |          7
-  7 | Wed Oct 19 10:23:54 2005 |           8 |          6
-  6 | Tue Oct 19 10:23:54 2004 |           7 |          5
-  5 | Sun Oct 19 10:23:54 2003 |           6 |          4
-  4 | Sat Oct 19 10:23:54 2002 |           5 |          2
-  3 | Fri Oct 19 10:23:54 2001 |           4 |          1
-  2 | Fri Oct 19 10:23:54 2001 |           4 |          1
-  1 | Thu Oct 19 10:23:54 2000 |           3 |          1
+ id |     f_timestamp     | first_value | last_value 
+----+---------------------+-------------+------------
+ 10 | 2008-10-19 10:23:54 |          10 |          9
+  9 | 2007-10-19 10:23:54 |          10 |          8
+  8 | 2006-10-19 10:23:54 |           9 |          7
+  7 | 2005-10-19 10:23:54 |           8 |          6
+  6 | 2004-10-19 10:23:54 |           7 |          5
+  5 | 2003-10-19 10:23:54 |           6 |          4
+  4 | 2002-10-19 10:23:54 |           5 |          2
+  3 | 2001-10-19 10:23:54 |           4 |          1
+  2 | 2001-10-19 10:23:54 |           4 |          1
+  1 | 2000-10-19 10:23:54 |           3 |          1
 (10 rows)
 
 -- RANGE offset PRECEDING/FOLLOWING error cases
@@ -2565,16 +2565,16 @@
 	salary, enroll_date from empsalary;
  first_value | lead | nth_value | salary | enroll_date 
 -------------+------+-----------+--------+-------------
-        5000 | 6000 |      5000 |   5000 | 10-01-2006
-        5000 | 3900 |      5000 |   6000 | 10-01-2006
-        5000 | 4800 |      5000 |   3900 | 12-23-2006
-        3900 | 5200 |      3900 |   4800 | 08-01-2007
-        3900 | 4800 |      3900 |   5200 | 08-01-2007
-        4800 | 5200 |      4800 |   4800 | 08-08-2007
-        4800 | 3500 |      4800 |   5200 | 08-15-2007
-        5200 | 4500 |      5200 |   3500 | 12-10-2007
-        3500 | 4200 |      3500 |   4500 | 01-01-2008
-        3500 |      |      3500 |   4200 | 01-01-2008
+        5000 | 6000 |      5000 |   5000 | 2006-10-01
+        5000 | 3900 |      5000 |   6000 | 2006-10-01
+        5000 | 4800 |      5000 |   3900 | 2006-12-23
+        3900 | 5200 |      3900 |   4800 | 2007-08-01
+        3900 | 4800 |      3900 |   5200 | 2007-08-01
+        4800 | 5200 |      4800 |   4800 | 2007-08-08
+        4800 | 3500 |      4800 |   5200 | 2007-08-15
+        5200 | 4500 |      5200 |   3500 | 2007-12-10
+        3500 | 4200 |      3500 |   4500 | 2008-01-01
+        3500 |      |      3500 |   4200 | 2008-01-01
 (10 rows)
 
 select last_value(salary) over(order by enroll_date groups between 1 preceding and 1 following),
@@ -2582,16 +2582,16 @@
 	salary, enroll_date from empsalary;
  last_value | lag  | salary | enroll_date 
 ------------+------+--------+-------------
-       3900 |      |   5000 | 10-01-2006
-       3900 | 5000 |   6000 | 10-01-2006
-       5200 | 6000 |   3900 | 12-23-2006
-       4800 | 3900 |   4800 | 08-01-2007
-       4800 | 4800 |   5200 | 08-01-2007
-       5200 | 5200 |   4800 | 08-08-2007
-       3500 | 4800 |   5200 | 08-15-2007
-       4200 | 5200 |   3500 | 12-10-2007
-       4200 | 3500 |   4500 | 01-01-2008
-       4200 | 4500 |   4200 | 01-01-2008
+       3900 |      |   5000 | 2006-10-01
+       3900 | 5000 |   6000 | 2006-10-01
+       5200 | 6000 |   3900 | 2006-12-23
+       4800 | 3900 |   4800 | 2007-08-01
+       4800 | 4800 |   5200 | 2007-08-01
+       5200 | 5200 |   4800 | 2007-08-08
+       3500 | 4800 |   5200 | 2007-08-15
+       4200 | 5200 |   3500 | 2007-12-10
+       4200 | 3500 |   4500 | 2008-01-01
+       4200 | 4500 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date groups between 1 following and 3 following
@@ -2602,16 +2602,16 @@
 	salary, enroll_date from empsalary;
  first_value | lead | nth_value | salary | enroll_date 
 -------------+------+-----------+--------+-------------
-        3900 | 6000 |      3900 |   5000 | 10-01-2006
-        3900 | 3900 |      3900 |   6000 | 10-01-2006
-        4800 | 4800 |      4800 |   3900 | 12-23-2006
-        4800 | 5200 |      4800 |   4800 | 08-01-2007
-        4800 | 4800 |      4800 |   5200 | 08-01-2007
-        5200 | 5200 |      5200 |   4800 | 08-08-2007
-        3500 | 3500 |      3500 |   5200 | 08-15-2007
-        4500 | 4500 |      4500 |   3500 | 12-10-2007
-             | 4200 |           |   4500 | 01-01-2008
-             |      |           |   4200 | 01-01-2008
+        3900 | 6000 |      3900 |   5000 | 2006-10-01
+        3900 | 3900 |      3900 |   6000 | 2006-10-01
+        4800 | 4800 |      4800 |   3900 | 2006-12-23
+        4800 | 5200 |      4800 |   4800 | 2007-08-01
+        4800 | 4800 |      4800 |   5200 | 2007-08-01
+        5200 | 5200 |      5200 |   4800 | 2007-08-08
+        3500 | 3500 |      3500 |   5200 | 2007-08-15
+        4500 | 4500 |      4500 |   3500 | 2007-12-10
+             | 4200 |           |   4500 | 2008-01-01
+             |      |           |   4200 | 2008-01-01
 (10 rows)
 
 select last_value(salary) over(order by enroll_date groups between 1 following and 3 following
@@ -2620,16 +2620,16 @@
 	salary, enroll_date from empsalary;
  last_value | lag  | salary | enroll_date 
 ------------+------+--------+-------------
-       4800 |      |   5000 | 10-01-2006
-       4800 | 5000 |   6000 | 10-01-2006
-       5200 | 6000 |   3900 | 12-23-2006
-       3500 | 3900 |   4800 | 08-01-2007
-       3500 | 4800 |   5200 | 08-01-2007
-       4200 | 5200 |   4800 | 08-08-2007
-       4200 | 4800 |   5200 | 08-15-2007
-       4200 | 5200 |   3500 | 12-10-2007
-            | 3500 |   4500 | 01-01-2008
-            | 4500 |   4200 | 01-01-2008
+       4800 |      |   5000 | 2006-10-01
+       4800 | 5000 |   6000 | 2006-10-01
+       5200 | 6000 |   3900 | 2006-12-23
+       3500 | 3900 |   4800 | 2007-08-01
+       3500 | 4800 |   5200 | 2007-08-01
+       4200 | 5200 |   4800 | 2007-08-08
+       4200 | 4800 |   5200 | 2007-08-15
+       4200 | 5200 |   3500 | 2007-12-10
+            | 3500 |   4500 | 2008-01-01
+            | 4500 |   4200 | 2008-01-01
 (10 rows)
 
 -- Show differences in offset interpretation between ROWS, RANGE, and GROUPS
@@ -3382,8 +3382,8 @@
   FROM (VALUES(1,'1 sec'),(2,'2 sec'),(3,NULL),(4,NULL)) t(i,v);
  i |    avg     
 ---+------------
- 1 | @ 1.5 secs
- 2 | @ 2 secs
+ 1 | 00:00:01.5
+ 2 | 00:00:02
  3 | 
  4 | 
 (4 rows)
@@ -3432,8 +3432,8 @@
   FROM (VALUES(1,'1 sec'),(2,'2 sec'),(3,NULL),(4,NULL)) t(i,v);
  i |   sum    
 ---+----------
- 1 | @ 3 secs
- 2 | @ 2 secs
+ 1 | 00:00:03
+ 2 | 00:00:02
  3 | 
  4 | 
 (4 rows)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/json.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/json.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/json.out	2019-09-02 18:21:49.555379953 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/json.out	2019-09-05 16:28:19.999068412 -0500
@@ -1351,9 +1351,9 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(null::jpop,'{"a":"blurfl","x":43.2}') q;
@@ -1363,9 +1363,9 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(null::jpop,'{"a":[100,200,false],"x":43.2}') q;
@@ -1375,17 +1375,17 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":[100,200,false],"x":43.2}') q;
-        a        | b |            c             
------------------+---+--------------------------
- [100,200,false] | 3 | Mon Dec 31 15:30:56 2012
+        a        | b |          c          
+-----------------+---+---------------------
+ [100,200,false] | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"c":[100,200,false],"x":43.2}') q;
 ERROR:  invalid input syntax for type timestamp: "[100,200,false]"
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{}') q;
- a | b |            c             
----+---+--------------------------
- x | 3 | Mon Dec 31 15:30:56 2012
+ a | b |          c          
+---+---+---------------------
+ x | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT i FROM json_populate_record(NULL::jsrec_i_not_null, '{"x": 43.2}') q;
@@ -1702,15 +1702,15 @@
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": [1, 2]}') q;
 ERROR:  cannot call populate_composite on an array
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}') q;
-                rec                
------------------------------------
- (abc,,"Thu Jan 02 00:00:00 2003")
+             rec              
+------------------------------
+ (abc,,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": "(abc,42,01.02.2003)"}') q;
-                 rec                 
--------------------------------------
- (abc,42,"Thu Jan 02 00:00:00 2003")
+              rec               
+--------------------------------
+ (abc,42,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": 123}') q;
@@ -1719,21 +1719,21 @@
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [1, 2]}') q;
 ERROR:  cannot call populate_composite on a scalar
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q;
-                          reca                          
---------------------------------------------------------
- {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+                       reca                        
+---------------------------------------------------
+ {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": ["(abc,42,01.02.2003)"]}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": "{\"(abc,42,01.02.2003)\"}"}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT rec FROM json_populate_record(
@@ -1741,9 +1741,9 @@
 		row('x',3,'2012-12-31 15:30:56')::jpop,NULL)::jsrec,
 	'{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}'
 ) q;
-                rec                 
-------------------------------------
- (abc,3,"Thu Jan 02 00:00:00 2003")
+              rec              
+-------------------------------
+ (abc,3,"2003-02-01 00:00:00")
 (1 row)
 
 -- anonymous record type
@@ -1780,38 +1780,38 @@
 ERROR:  value for domain j_ordered_pair violates check constraint "j_ordered_pair_check"
 -- populate_recordset
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-       a       | b  |            c             
----------------+----+--------------------------
+       a       | b  |          c          
+---------------+----+---------------------
  [100,200,300] | 99 | 
- {"z":true}    |  3 | Fri Jan 20 10:42:53 2012
+ {"z":true}    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"c":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
@@ -1824,24 +1824,24 @@
 (1 row)
 
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-       a       | b  |            c             
----------------+----+--------------------------
+       a       | b  |          c          
+---------------+----+---------------------
  [100,200,300] | 99 | 
- {"z":true}    |  3 | Fri Jan 20 10:42:53 2012
+ {"z":true}    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 -- anonymous record type
@@ -1930,11 +1930,11 @@
 }'::json
 FROM generate_series(1, 3);
 SELECT (json_populate_record(NULL::jsrec, js)).* FROM jspoptest;
- i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |                rec                |                          reca                          
----+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+-----------------------------------+--------------------------------------------------------
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+ i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |             rec              |                       reca                        
+---+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+------------------------------+---------------------------------------------------
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (3 rows)
 
 DROP TYPE jsrec;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/jsonb.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/jsonb.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/jsonb.out	2019-09-02 18:21:49.555379953 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/jsonb.out	2019-09-05 16:28:20.339097345 -0500
@@ -2040,9 +2040,9 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(NULL::jbpop,'{"a":"blurfl","x":43.2}') q;
@@ -2052,9 +2052,9 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(NULL::jbpop,'{"a":[100,200,false],"x":43.2}') q;
@@ -2064,17 +2064,17 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":[100,200,false],"x":43.2}') q;
-         a         | b |            c             
--------------------+---+--------------------------
- [100, 200, false] | 3 | Mon Dec 31 15:30:56 2012
+         a         | b |          c          
+-------------------+---+---------------------
+ [100, 200, false] | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"c":[100,200,false],"x":43.2}') q;
 ERROR:  invalid input syntax for type timestamp: "[100, 200, false]"
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop, '{}') q;
- a | b |            c             
----+---+--------------------------
- x | 3 | Mon Dec 31 15:30:56 2012
+ a | b |          c          
+---+---+---------------------
+ x | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT i FROM jsonb_populate_record(NULL::jsbrec_i_not_null, '{"x": 43.2}') q;
@@ -2391,15 +2391,15 @@
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": [1, 2]}') q;
 ERROR:  cannot call populate_composite on an array
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}') q;
-                rec                
------------------------------------
- (abc,,"Thu Jan 02 00:00:00 2003")
+             rec              
+------------------------------
+ (abc,,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": "(abc,42,01.02.2003)"}') q;
-                 rec                 
--------------------------------------
- (abc,42,"Thu Jan 02 00:00:00 2003")
+              rec               
+--------------------------------
+ (abc,42,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": 123}') q;
@@ -2408,21 +2408,21 @@
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [1, 2]}') q;
 ERROR:  cannot call populate_composite on a scalar
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q;
-                          reca                          
---------------------------------------------------------
- {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+                       reca                        
+---------------------------------------------------
+ {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": ["(abc,42,01.02.2003)"]}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": "{\"(abc,42,01.02.2003)\"}"}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT rec FROM jsonb_populate_record(
@@ -2430,9 +2430,9 @@
 		row('x',3,'2012-12-31 15:30:56')::jbpop,NULL)::jsbrec,
 	'{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}'
 ) q;
-                rec                 
-------------------------------------
- (abc,3,"Thu Jan 02 00:00:00 2003")
+              rec              
+-------------------------------
+ (abc,3,"2003-02-01 00:00:00")
 (1 row)
 
 -- anonymous record type
@@ -2469,61 +2469,61 @@
 ERROR:  value for domain jb_ordered_pair violates check constraint "jb_ordered_pair_check"
 -- populate_recordset
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-        a        | b  |            c             
------------------+----+--------------------------
+        a        | b  |          c          
+-----------------+----+---------------------
  [100, 200, 300] | 99 | 
- {"z": true}     |  3 | Fri Jan 20 10:42:53 2012
+ {"z": true}     |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"c":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
 ERROR:  invalid input syntax for type timestamp: "[100, 200, 300]"
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-        a        | b  |            c             
------------------+----+--------------------------
+        a        | b  |          c          
+-----------------+----+---------------------
  [100, 200, 300] | 99 | 
- {"z": true}     |  3 | Fri Jan 20 10:42:53 2012
+ {"z": true}     |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 -- anonymous record type
@@ -2725,11 +2725,11 @@
 }'::jsonb
 FROM generate_series(1, 3);
 SELECT (jsonb_populate_record(NULL::jsbrec, js)).* FROM jsbpoptest;
- i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |                rec                |                          reca                          
----+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+-----------------------------------+--------------------------------------------------------
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+ i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |             rec              |                       reca                        
+---+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+------------------------------+---------------------------------------------------
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (3 rows)
 
 DROP TYPE jsbrec;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/plpgsql.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/plpgsql.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/plpgsql.out	2019-08-12 14:55:05.446231980 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/plpgsql.out	2019-09-05 16:28:24.787475847 -0500
@@ -4210,33 +4210,33 @@
 select cast_invoker(20150717);
  cast_invoker 
 --------------
- 07-17-2015
+ 2015-07-17
 (1 row)
 
 select cast_invoker(20150718);  -- second call crashed in pre-release 9.5
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 begin;
 select cast_invoker(20150717);
  cast_invoker 
 --------------
- 07-17-2015
+ 2015-07-17
 (1 row)
 
 select cast_invoker(20150718);
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 savepoint s1;
 select cast_invoker(20150718);
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 select cast_invoker(-1); -- fails
@@ -4247,13 +4247,13 @@
 select cast_invoker(20150719);
  cast_invoker 
 --------------
- 07-19-2015
+ 2015-07-19
 (1 row)
 
 select cast_invoker(20150720);
  cast_invoker 
 --------------
- 07-20-2015
+ 2015-07-20
 (1 row)
 
 commit;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/alter_table.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/alter_table.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/alter_table.out	2019-08-12 14:55:15.915120765 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/alter_table.out	2019-09-05 16:28:26.755643309 -0500
@@ -48,9 +48,9 @@
 	'(0,2,4.1,4.1,3.1,3.1)', '(4.1,4.1,3.1,3.1)',
 	'epoch', '01:00:10', '{1.0,2.0,3.0,4.0}', '{1.0,2.0,3.0,4.0}', '{1,2,3,4}');
 SELECT * FROM attmp;
- initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |            v             |        w         |     x     |     y     |     z     
----------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+--------------------------+------------------+-----------+-----------+-----------
-         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | Thu Jan 01 00:00:00 1970 | @ 1 hour 10 secs | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
+ initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |          v          |    w     |     x     |     y     |     z     
+---------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+---------------------+----------+-----------+-----------+-----------
+         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | 1970-01-01 00:00:00 | 01:00:10 | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
 (1 row)
 
 DROP TABLE attmp;
@@ -90,9 +90,9 @@
 	'(0,2,4.1,4.1,3.1,3.1)', '(4.1,4.1,3.1,3.1)',
 	'epoch', '01:00:10', '{1.0,2.0,3.0,4.0}', '{1.0,2.0,3.0,4.0}', '{1,2,3,4}');
 SELECT * FROM attmp;
- initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |            v             |        w         |     x     |     y     |     z     
----------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+--------------------------+------------------+-----------+-----------+-----------
-         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | Thu Jan 01 00:00:00 1970 | @ 1 hour 10 secs | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
+ initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |          v          |    w     |     x     |     y     |     z     
+---------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+---------------------+----------+-----------+-----------+-----------
+         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | 1970-01-01 00:00:00 | 01:00:10 | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
 (1 row)
 
 CREATE INDEX attmp_idx ON attmp (a, (d + e), b);
@@ -541,11 +541,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
 (7 rows)
 
 create table nv_child_2009 (check (d between '2009-01-01'::date and '2009-12-31'::date)) inherits (nv_parent);
@@ -554,11 +554,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
 (7 rows)
 
 explain (costs off) select * from nv_parent where d between '2009-08-01'::date and '2009-08-31'::date;
@@ -566,13 +566,13 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2009
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
 (9 rows)
 
 -- after validation, the constraint should be used
@@ -582,11 +582,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2009
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
 (7 rows)
 
 -- add an inherited NOT VALID constraint
@@ -597,8 +597,8 @@
 --------+------+-----------+----------+---------
  d      | date |           |          | 
 Check constraints:
-    "nv_child_2009_d_check" CHECK (d >= '01-01-2009'::date AND d <= '12-31-2009'::date)
-    "nv_parent_d_check" CHECK (d >= '01-01-2001'::date AND d <= '12-31-2099'::date) NOT VALID
+    "nv_child_2009_d_check" CHECK (d >= '2009-01-01'::date AND d <= '2009-12-31'::date)
+    "nv_parent_d_check" CHECK (d >= '2001-01-01'::date AND d <= '2099-12-31'::date) NOT VALID
 Inherits: nv_parent
 
 -- we leave nv_parent and children around to help test pg_dump logic
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/polymorphism.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/polymorphism.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/polymorphism.out	2019-07-12 13:20:36.225289250 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/polymorphism.out	2019-09-05 16:28:21.207171208 -0500
@@ -1027,7 +1027,7 @@
 select dfunc(to_date('20081215','YYYYMMDD'));
        dfunc       
 -------------------
- Hello, 12-15-2008
+ Hello, 2008-12-15
 (1 row)
 
 select dfunc('City'::text);
@@ -1202,31 +1202,31 @@
 select (dfunc('Hello World', 20, '2009-07-25'::date)).*;
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', 20, '2009-07-25'::date);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc(c := '2009-07-25'::date, a := 'Hello World', b := 20);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', b := 20, c := '2009-07-25'::date);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', c := '2009-07-25'::date, b := 20);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', c := 20, b := '2009-07-25'::date);  -- fail
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rowtypes.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rowtypes.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rowtypes.out	2019-08-12 14:55:05.454232660 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rowtypes.out	2019-09-05 16:28:21.183169166 -0500
@@ -95,7 +95,7 @@
 select * from people;
      fn     |     bd     
 ------------+------------
- (Joe,Blow) | 01-10-1984
+ (Joe,Blow) | 1984-01-10
 (1 row)
 
 -- at the moment this will not work due to ALTER TABLE inadequacy:
@@ -106,7 +106,7 @@
 select * from people;
      fn      |     bd     
 -------------+------------
- (Joe,Blow,) | 01-10-1984
+ (Joe,Blow,) | 1984-01-10
 (1 row)
 
 -- test insertion/updating of subfields
@@ -114,7 +114,7 @@
 select * from people;
       fn       |     bd     
 ---------------+------------
- (Joe,Blow,Jr) | 01-10-1984
+ (Joe,Blow,Jr) | 1984-01-10
 (1 row)
 
 insert into quadtable (f1, q.c1.r, q.c2.i) values(44,55,66);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/partition_prune.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/partition_prune.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/partition_prune.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/partition_prune.out	2019-09-05 16:28:30.059924450 -0500
@@ -3149,12 +3149,12 @@
 -- timestamp < timestamptz comparison is only stable, not immutable
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning where a < '2000-02-01'::timestamptz;
-                                   QUERY PLAN                                   
---------------------------------------------------------------------------------
+                                QUERY PLAN                                
+--------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning1 (actual rows=0 loops=1)
-         Filter: (a < 'Tue Feb 01 00:00:00 2000 PST'::timestamp with time zone)
+         Filter: (a < '2000-02-01 00:00:00-05'::timestamp with time zone)
 (4 rows)
 
 -- check ScalarArrayOp cases
@@ -3170,43 +3170,43 @@
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', '2010-01-01']::timestamp[]);
-                                                   QUERY PLAN                                                   
-----------------------------------------------------------------------------------------------------------------
+                                              QUERY PLAN                                              
+------------------------------------------------------------------------------------------------------
  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-   Filter: (a = ANY ('{"Tue Feb 01 00:00:00 2000","Fri Jan 01 00:00:00 2010"}'::timestamp without time zone[]))
+   Filter: (a = ANY ('{"2000-02-01 00:00:00","2010-01-01 00:00:00"}'::timestamp without time zone[]))
 (2 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', localtimestamp]::timestamp[]);
-                                                 QUERY PLAN                                                 
-------------------------------------------------------------------------------------------------------------
+                                              QUERY PLAN                                               
+-------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-         Filter: (a = ANY (ARRAY['Tue Feb 01 00:00:00 2000'::timestamp without time zone, LOCALTIMESTAMP]))
+         Filter: (a = ANY (ARRAY['2000-02-01 00:00:00'::timestamp without time zone, LOCALTIMESTAMP]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2010-02-01', '2020-01-01']::timestamptz[]);
-                                                        QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
+                                                  QUERY PLAN                                                   
+---------------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning1 (never executed)
-         Filter: (a = ANY ('{"Mon Feb 01 00:00:00 2010 PST","Wed Jan 01 00:00:00 2020 PST"}'::timestamp with time zone[]))
+         Filter: (a = ANY ('{"2010-02-01 00:00:00-05","2020-01-01 00:00:00-05"}'::timestamp with time zone[]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', '2010-01-01']::timestamptz[]);
-                                                        QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
+                                                  QUERY PLAN                                                   
+---------------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-         Filter: (a = ANY ('{"Tue Feb 01 00:00:00 2000 PST","Fri Jan 01 00:00:00 2010 PST"}'::timestamp with time zone[]))
+         Filter: (a = ANY ('{"2000-02-01 00:00:00-05","2010-01-01 00:00:00-05"}'::timestamp with time zone[]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/fast_default.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/fast_default.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/fast_default.out	2019-07-12 13:20:36.197291926 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/fast_default.out	2019-09-05 16:28:31.320031663 -0500
@@ -126,36 +126,36 @@
        c_hugetext = repeat('abcdefg',1000) as c_hugetext_origdef,
        c_hugetext = repeat('poiuyt', 1000) as c_hugetext_newdef
 FROM T ORDER BY pk;
- pk | c_int | c_bpchar | c_text |   c_date   |       c_timestamp        |     c_timestamp_null     |         c_array          | c_small | c_small_null |       c_big       |       c_num       |  c_time  | c_interval | c_hugetext_origdef | c_hugetext_newdef 
-----+-------+----------+--------+------------+--------------------------+--------------------------+--------------------------+---------+--------------+-------------------+-------------------+----------+------------+--------------------+-------------------
-  1 |     1 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  2 |     1 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  3 |     2 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  4 |     2 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  5 |     2 | dog      | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  6 |     2 | dog      | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  7 |     2 | dog      | cat    | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  8 |     2 | dog      | cat    | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  9 |     2 | dog      | cat    | 01-01-2010 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 10 |     2 | dog      | cat    | 01-01-2010 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 11 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 12 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 13 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 14 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 15 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 16 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 17 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 18 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 19 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day    | t                  | f
- 20 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day    | t                  | f
- 21 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day    | t                  | f
- 22 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day    | t                  | f
- 23 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours  | t                  | f
- 24 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours  | t                  | f
- 25 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
- 26 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
- 27 |     2 |          |        |            |                          | Thu Sep 29 12:00:00 2016 |                          |         |           13 |                   |                   |          |            |                    | 
- 28 |     2 |          |        |            |                          | Thu Sep 29 12:00:00 2016 |                          |         |           13 |                   |                   |          |            |                    | 
+ pk | c_int | c_bpchar | c_text |   c_date   |     c_timestamp     |  c_timestamp_null   |         c_array          | c_small | c_small_null |       c_big       |       c_num       |  c_time  | c_interval | c_hugetext_origdef | c_hugetext_newdef 
+----+-------+----------+--------+------------+---------------------+---------------------+--------------------------+---------+--------------+-------------------+-------------------+----------+------------+--------------------+-------------------
+  1 |     1 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  2 |     1 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  3 |     2 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  4 |     2 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  5 |     2 | dog      | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  6 |     2 | dog      | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  7 |     2 | dog      | cat    | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  8 |     2 | dog      | cat    | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  9 |     2 | dog      | cat    | 2010-01-01 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 10 |     2 | dog      | cat    | 2010-01-01 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 11 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 12 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 13 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 14 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 15 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 16 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 17 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 18 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 19 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | 1 day      | t                  | f
+ 20 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | 1 day      | t                  | f
+ 21 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 1 day      | t                  | f
+ 22 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 1 day      | t                  | f
+ 23 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 03:00:00   | t                  | f
+ 24 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 03:00:00   | t                  | f
+ 25 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
+ 26 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
+ 27 |     2 |          |        |            |                     | 2016-09-29 12:00:00 |                          |         |           13 |                   |                   |          |            |                    | 
+ 28 |     2 |          |        |            |                     | 2016-09-29 12:00:00 |                          |         |           13 |                   |                   |          |            |                    | 
 (28 rows)
 
 SELECT comp();
@@ -218,24 +218,24 @@
               ALTER COLUMN c_array     DROP DEFAULT;
 INSERT INTO T VALUES (15), (16);
 SELECT * FROM T;
- pk | c_int | c_bpchar |    c_text    |   c_date   |       c_timestamp        |            c_array            
-----+-------+----------+--------------+------------+--------------------------+-------------------------------
-  1 |     6 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  2 |     6 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  3 |     8 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  4 |     8 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  5 |     8 | abc      | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  6 |     8 | abc      | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  7 |     8 | abc      | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  8 |     8 | abc      | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  9 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
- 10 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
- 11 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world}
- 12 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world}
- 13 |       | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy}
- 14 |       | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy}
- 15 |       |          |              |            |                          | 
- 16 |       |          |              |            |                          | 
+ pk | c_int | c_bpchar |    c_text    |   c_date   |     c_timestamp     |            c_array            
+----+-------+----------+--------------+------------+---------------------+-------------------------------
+  1 |     6 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  2 |     6 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  3 |     8 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  4 |     8 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  5 |     8 | abc      | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  6 |     8 | abc      | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  7 |     8 | abc      | abcdefghijkl | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  8 |     8 | abc      | abcdefghijkl | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  9 |     8 | abc      | abcdefghijkl | 2009-12-28 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+ 10 |     8 | abc      | abcdefghijkl | 2009-12-28 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+ 11 |     8 | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,abcd,the,real,world}
+ 12 |     8 | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,abcd,the,real,world}
+ 13 |       | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,a,fantasy}
+ 14 |       | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,a,fantasy}
+ 15 |       |          |              |            |                     | 
+ 16 |       |          |              |            |                     | 
 (16 rows)
 
 SELECT comp();
Attachment: regression.diffs (application/octet-stream)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/text.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/text.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/text.out	2019-07-12 13:20:36.241287721 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/text.out	2019-09-05 16:22:51.067035302 -0500
@@ -63,7 +63,7 @@
 select concat(1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
         concat        
 ----------------------
- 123hellotf03-09-2010
+ 123hellotf2010-03-09
 (1 row)
 
 select concat_ws('#','one');
@@ -75,7 +75,7 @@
 select concat_ws('#',1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
          concat_ws          
 ----------------------------
- 1#2#3#hello#t#f#03-09-2010
+ 1#2#3#hello#t#f#2010-03-09
 (1 row)
 
 select concat_ws(',',10,20,null,30);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/int8.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/int8.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/int8.out	2019-08-12 14:55:05.434230962 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/int8.out	2019-09-05 16:22:51.223048624 -0500
@@ -462,20 +462,20 @@
 -----------+------------------------+------------------------
            |                    123 |                    456
            |                    123 |  4,567,890,123,456,789
-           |  4,567,890,123,456,789 |                    123
-           |  4,567,890,123,456,789 |  4,567,890,123,456,789
-           |  4,567,890,123,456,789 | -4,567,890,123,456,789
+           |  4.567.890.123.456.789 |                    123
+           |  4.567.890.123.456.789 |  4,567,890,123,456,789
+           |  4.567.890.123.456.789 | -4,567,890,123,456,789
 (5 rows)
 
 SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999')
 	FROM INT8_TBL;
  to_char_2 |            to_char             |            to_char             
 -----------+--------------------------------+--------------------------------
-           |                    123.000,000 |                    456.000,000
-           |                    123.000,000 |  4,567,890,123,456,789.000,000
-           |  4,567,890,123,456,789.000,000 |                    123.000,000
-           |  4,567,890,123,456,789.000,000 |  4,567,890,123,456,789.000,000
-           |  4,567,890,123,456,789.000,000 | -4,567,890,123,456,789.000,000
+           |                    123,000.000 |                    456.000,000
+           |                    123,000.000 |  4,567,890,123,456,789.000,000
+           |  4.567.890.123.456.789,000.000 |                    123.000,000
+           |  4.567.890.123.456.789,000.000 |  4,567,890,123,456,789.000,000
+           |  4.567.890.123.456.789,000.000 | -4,567,890,123,456,789.000,000
 (5 rows)
 
 SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR')
@@ -583,11 +583,11 @@
 SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000')  FROM INT8_TBL;
  to_char_13 |        to_char         
 ------------+------------------------
-            |                456.000
-            |   4567890123456789.000
-            |                123.000
-            |   4567890123456789.000
-            |  -4567890123456789.000
+            | $              456.000
+            | $ 4567890123456789.000
+            | $              123.000
+            | $ 4567890123456789.000
+            | $-4567890123456789.000
 (5 rows)
 
 SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rangetypes.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rangetypes.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rangetypes.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rangetypes.out	2019-09-05 16:22:54.095293886 -0500
@@ -619,25 +619,25 @@
 select daterange('2000-01-10'::date, '2000-01-20'::date, '[]');
         daterange        
 -------------------------
- [01-10-2000,01-21-2000)
+ [2000-01-10,2000-01-21)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '[)');
         daterange        
 -------------------------
- [01-10-2000,01-20-2000)
+ [2000-01-10,2000-01-20)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '(]');
         daterange        
 -------------------------
- [01-11-2000,01-21-2000)
+ [2000-01-11,2000-01-21)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-20'::date, '()');
         daterange        
 -------------------------
- [01-11-2000,01-20-2000)
+ [2000-01-11,2000-01-20)
 (1 row)
 
 select daterange('2000-01-10'::date, '2000-01-11'::date, '()');
@@ -649,31 +649,31 @@
 select daterange('2000-01-10'::date, '2000-01-11'::date, '(]');
         daterange        
 -------------------------
- [01-11-2000,01-12-2000)
+ [2000-01-11,2000-01-12)
 (1 row)
 
 select daterange('-infinity'::date, '2000-01-01'::date, '()');
        daterange        
 ------------------------
- (-infinity,01-01-2000)
+ (-infinity,2000-01-01)
 (1 row)
 
 select daterange('-infinity'::date, '2000-01-01'::date, '[)');
        daterange        
 ------------------------
- [-infinity,01-01-2000)
+ [-infinity,2000-01-01)
 (1 row)
 
 select daterange('2000-01-01'::date, 'infinity'::date, '[)');
        daterange       
 -----------------------
- [01-01-2000,infinity)
+ [2000-01-01,infinity)
 (1 row)
 
 select daterange('2000-01-01'::date, 'infinity'::date, '[]');
        daterange       
 -----------------------
- [01-01-2000,infinity]
+ [2000-01-01,infinity]
 (1 row)
 
 -- test GiST index that's been built incrementally
@@ -1166,13 +1166,13 @@
 insert into test_range_excl
   values(int4range(123, 123, '[]'), int4range(3, 3, '[]'), '[2010-01-02 10:10, 2010-01-02 11:00)');
 ERROR:  conflicting key value violates exclusion constraint "test_range_excl_room_during_excl"
-DETAIL:  Key (room, during)=([123,124), ["Sat Jan 02 10:10:00 2010","Sat Jan 02 11:00:00 2010")) conflicts with existing key (room, during)=([123,124), ["Sat Jan 02 10:00:00 2010","Sat Jan 02 11:00:00 2010")).
+DETAIL:  Key (room, during)=([123,124), ["2010-01-02 10:10:00","2010-01-02 11:00:00")) conflicts with existing key (room, during)=([123,124), ["2010-01-02 10:00:00","2010-01-02 11:00:00")).
 insert into test_range_excl
   values(int4range(124, 124, '[]'), int4range(3, 3, '[]'), '[2010-01-02 10:10, 2010-01-02 11:10)');
 insert into test_range_excl
   values(int4range(125, 125, '[]'), int4range(1, 1, '[]'), '[2010-01-02 10:10, 2010-01-02 11:00)');
 ERROR:  conflicting key value violates exclusion constraint "test_range_excl_speaker_during_excl"
-DETAIL:  Key (speaker, during)=([1,2), ["Sat Jan 02 10:10:00 2010","Sat Jan 02 11:00:00 2010")) conflicts with existing key (speaker, during)=([1,2), ["Sat Jan 02 10:00:00 2010","Sat Jan 02 11:00:00 2010")).
+DETAIL:  Key (speaker, during)=([1,2), ["2010-01-02 10:10:00","2010-01-02 11:00:00")) conflicts with existing key (speaker, during)=([1,2), ["2010-01-02 10:00:00","2010-01-02 11:00:00")).
 -- test bigint ranges
 select int8range(10000000000::int8, 20000000000::int8,'(]');
          int8range         
@@ -1183,9 +1183,9 @@
 -- test tstz ranges
 set timezone to '-08';
 select '[2010-01-01 01:00:00 -05, 2010-01-01 02:00:00 -08)'::tstzrange;
-                            tstzrange                            
------------------------------------------------------------------
- ["Thu Dec 31 22:00:00 2009 -08","Fri Jan 01 02:00:00 2010 -08")
+                      tstzrange                      
+-----------------------------------------------------
+ ["2009-12-31 22:00:00-08","2010-01-01 02:00:00-08")
 (1 row)
 
 -- should fail
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/date.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/date.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/date.out	2019-08-12 14:55:05.422229943 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/date.out	2019-09-05 16:22:57.399576026 -0500
@@ -24,44 +24,44 @@
 SELECT f1 AS "Fifteen" FROM DATE_TBL;
   Fifteen   
 ------------
- 04-09-1957
- 06-13-1957
- 02-28-1996
- 02-29-1996
- 03-01-1996
- 03-02-1996
- 02-28-1997
- 03-01-1997
- 03-02-1997
- 04-01-2000
- 04-02-2000
- 04-03-2000
- 04-08-2038
- 04-09-2039
- 04-10-2040
+ 1957-04-09
+ 1957-06-13
+ 1996-02-28
+ 1996-02-29
+ 1996-03-01
+ 1996-03-02
+ 1997-02-28
+ 1997-03-01
+ 1997-03-02
+ 2000-04-01
+ 2000-04-02
+ 2000-04-03
+ 2038-04-08
+ 2039-04-09
+ 2040-04-10
 (15 rows)
 
 SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
     Nine    
 ------------
- 04-09-1957
- 06-13-1957
- 02-28-1996
- 02-29-1996
- 03-01-1996
- 03-02-1996
- 02-28-1997
- 03-01-1997
- 03-02-1997
+ 1957-04-09
+ 1957-06-13
+ 1996-02-28
+ 1996-02-29
+ 1996-03-01
+ 1996-03-02
+ 1997-02-28
+ 1997-03-01
+ 1997-03-02
 (9 rows)
 
 SELECT f1 AS "Three" FROM DATE_TBL
   WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
    Three    
 ------------
- 04-01-2000
- 04-02-2000
- 04-03-2000
+ 2000-04-01
+ 2000-04-02
+ 2000-04-03
 (3 rows)
 
 --
@@ -1140,63 +1140,63 @@
 -- test trunc function!
 --
 SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
-        date_trunc        
---------------------------
- Thu Jan 01 00:00:00 1001
+     date_trunc      
+---------------------
+ 1001-01-01 00:00:00
 (1 row)
 
 SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
           date_trunc          
 ------------------------------
- Thu Jan 01 00:00:00 1001 PST
+ 1001-01-01 00:00:00-05:19:20
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
-        date_trunc        
---------------------------
- Tue Jan 01 00:00:00 1901
+     date_trunc      
+---------------------
+ 1901-01-01 00:00:00
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
-          date_trunc          
-------------------------------
- Tue Jan 01 00:00:00 1901 PST
+        date_trunc         
+---------------------------
+ 1901-01-01 00:00:00-05:14
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
-          date_trunc          
-------------------------------
- Mon Jan 01 00:00:00 2001 PST
+       date_trunc       
+------------------------
+ 2001-01-01 00:00:00-05
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
           date_trunc          
 ------------------------------
- Mon Jan 01 00:00:00 0001 PST
+ 0001-01-01 00:00:00-05:19:20
 (1 row)
 
 SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
            date_trunc            
 ---------------------------------
- Tue Jan 01 00:00:00 0100 PST BC
+ 0100-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
-          date_trunc          
-------------------------------
- Mon Jan 01 00:00:00 1990 PST
+       date_trunc       
+------------------------
+ 1990-01-01 00:00:00-05
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
            date_trunc            
 ---------------------------------
- Sat Jan 01 00:00:00 0001 PST BC
+ 0001-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
            date_trunc            
 ---------------------------------
- Mon Jan 01 00:00:00 0011 PST BC
+ 0011-01-01 00:00:00-05:19:20 BC
 (1 row)
 
 --
@@ -1448,13 +1448,13 @@
 select make_date(2013, 7, 15);
  make_date  
 ------------
- 07-15-2013
+ 2013-07-15
 (1 row)
 
 select make_date(-44, 3, 15);
    make_date   
 ---------------
- 03-15-0044 BC
+ 0044-03-15 BC
 (1 row)
 
 select make_time(8, 20, 0.0);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamp.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamp.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamp.out	2019-08-12 14:55:05.458232999 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamp.out	2019-09-05 16:22:58.027629652 -0500
@@ -168,80 +168,80 @@
 LINE 1: INSERT INTO TIMESTAMP_TBL VALUES ('Feb 16 17:32:01 5097 BC')...
                                           ^
 SELECT '' AS "64", d1 FROM TIMESTAMP_TBL;
- 64 |             d1              
-----+-----------------------------
+ 64 |           d1           
+----+------------------------
     | -infinity
     | infinity
-    | Thu Jan 01 00:00:00 1970
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 00:00:00 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1970-01-01 00:00:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 00:00:00
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (65 rows)
 
 -- Check behavior at the lower boundary of the timestamp range
 SELECT '4714-11-24 00:00:00 BC'::timestamp;
-          timestamp          
------------------------------
- Mon Nov 24 00:00:00 4714 BC
+       timestamp        
+------------------------
+ 4714-11-24 00:00:00 BC
 (1 row)
 
 SELECT '4714-11-23 23:59:59 BC'::timestamp;  -- out of range
@@ -252,300 +252,300 @@
 -- Demonstrate functions and operators
 SELECT '' AS "48", d1 FROM TIMESTAMP_TBL
    WHERE d1 > timestamp without time zone '1997-01-02';
- 48 |             d1             
-----+----------------------------
+ 48 |          d1           
+----+-----------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (49 rows)
 
 SELECT '' AS "15", d1 FROM TIMESTAMP_TBL
    WHERE d1 < timestamp without time zone '1997-01-02';
- 15 |             d1              
-----+-----------------------------
+ 15 |           d1           
+----+------------------------
     | -infinity
-    | Thu Jan 01 00:00:00 1970
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
+    | 1970-01-01 00:00:00
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
 (15 rows)
 
 SELECT '' AS one, d1 FROM TIMESTAMP_TBL
    WHERE d1 = timestamp without time zone '1997-01-02';
- one |            d1            
------+--------------------------
-     | Thu Jan 02 00:00:00 1997
+ one |         d1          
+-----+---------------------
+     | 1997-01-02 00:00:00
 (1 row)
 
 SELECT '' AS "63", d1 FROM TIMESTAMP_TBL
    WHERE d1 != timestamp without time zone '1997-01-02';
- 63 |             d1              
-----+-----------------------------
+ 63 |           d1           
+----+------------------------
     | -infinity
     | infinity
-    | Thu Jan 01 00:00:00 1970
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1970-01-01 00:00:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (64 rows)
 
 SELECT '' AS "16", d1 FROM TIMESTAMP_TBL
    WHERE d1 <= timestamp without time zone '1997-01-02';
- 16 |             d1              
-----+-----------------------------
+ 16 |           d1           
+----+------------------------
     | -infinity
-    | Thu Jan 01 00:00:00 1970
-    | Thu Jan 02 00:00:00 1997
-    | Tue Feb 16 17:32:01 0097 BC
-    | Sat Feb 16 17:32:01 0097
-    | Thu Feb 16 17:32:01 0597
-    | Tue Feb 16 17:32:01 1097
-    | Sat Feb 16 17:32:01 1697
-    | Thu Feb 16 17:32:01 1797
-    | Tue Feb 16 17:32:01 1897
-    | Wed Feb 28 17:32:01 1996
-    | Thu Feb 29 17:32:01 1996
-    | Fri Mar 01 17:32:01 1996
-    | Mon Dec 30 17:32:01 1996
-    | Tue Dec 31 17:32:01 1996
-    | Wed Jan 01 17:32:01 1997
+    | 1970-01-01 00:00:00
+    | 1997-01-02 00:00:00
+    | 0097-02-16 17:32:01 BC
+    | 0097-02-16 17:32:01
+    | 0597-02-16 17:32:01
+    | 1097-02-16 17:32:01
+    | 1697-02-16 17:32:01
+    | 1797-02-16 17:32:01
+    | 1897-02-16 17:32:01
+    | 1996-02-28 17:32:01
+    | 1996-02-29 17:32:01
+    | 1996-03-01 17:32:01
+    | 1996-12-30 17:32:01
+    | 1996-12-31 17:32:01
+    | 1997-01-01 17:32:01
 (16 rows)
 
 SELECT '' AS "49", d1 FROM TIMESTAMP_TBL
    WHERE d1 >= timestamp without time zone '1997-01-02';
- 49 |             d1             
-----+----------------------------
+ 49 |          d1           
+----+-----------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:02 1997
-    | Mon Feb 10 17:32:01.4 1997
-    | Mon Feb 10 17:32:01.5 1997
-    | Mon Feb 10 17:32:01.6 1997
-    | Thu Jan 02 00:00:00 1997
-    | Thu Jan 02 03:04:05 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 17:32:01 1997
-    | Sat Sep 22 18:19:20 2001
-    | Wed Mar 15 08:14:01 2000
-    | Wed Mar 15 13:14:02 2000
-    | Wed Mar 15 12:14:03 2000
-    | Wed Mar 15 03:14:04 2000
-    | Wed Mar 15 02:14:05 2000
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:00 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Jun 10 18:32:01 1997
-    | Mon Feb 10 17:32:01 1997
-    | Tue Feb 11 17:32:01 1997
-    | Wed Feb 12 17:32:01 1997
-    | Thu Feb 13 17:32:01 1997
-    | Fri Feb 14 17:32:01 1997
-    | Sat Feb 15 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sun Feb 16 17:32:01 1997
-    | Sat Feb 16 17:32:01 2097
-    | Fri Feb 28 17:32:01 1997
-    | Sat Mar 01 17:32:01 1997
-    | Tue Dec 30 17:32:01 1997
-    | Wed Dec 31 17:32:01 1997
-    | Fri Dec 31 17:32:01 1999
-    | Sat Jan 01 17:32:01 2000
-    | Sun Dec 31 17:32:01 2000
-    | Mon Jan 01 17:32:01 2001
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:02
+    | 1997-02-10 17:32:01.4
+    | 1997-02-10 17:32:01.5
+    | 1997-02-10 17:32:01.6
+    | 1997-01-02 00:00:00
+    | 1997-01-02 03:04:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 17:32:01
+    | 2001-09-22 18:19:20
+    | 2000-03-15 08:14:01
+    | 2000-03-15 13:14:02
+    | 2000-03-15 12:14:03
+    | 2000-03-15 03:14:04
+    | 2000-03-15 02:14:05
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:00
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-10 17:32:01
+    | 1997-06-10 18:32:01
+    | 1997-02-10 17:32:01
+    | 1997-02-11 17:32:01
+    | 1997-02-12 17:32:01
+    | 1997-02-13 17:32:01
+    | 1997-02-14 17:32:01
+    | 1997-02-15 17:32:01
+    | 1997-02-16 17:32:01
+    | 1997-02-16 17:32:01
+    | 2097-02-16 17:32:01
+    | 1997-02-28 17:32:01
+    | 1997-03-01 17:32:01
+    | 1997-12-30 17:32:01
+    | 1997-12-31 17:32:01
+    | 1999-12-31 17:32:01
+    | 2000-01-01 17:32:01
+    | 2000-12-31 17:32:01
+    | 2001-01-01 17:32:01
 (50 rows)
 
 SELECT '' AS "54", d1 - timestamp without time zone '1997-01-02' AS diff
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 1724 days 18 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 13 hours 14 mins 2 secs
-    | @ 1168 days 12 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 2 hours 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 18 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |        diff         
+----+---------------------
+    | -9863 days
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:02
+    | 39 days 17:32:01.4
+    | 39 days 17:32:01.5
+    | 39 days 17:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 17:32:01
+    | 1724 days 18:19:20
+    | 1168 days 08:14:01
+    | 1168 days 13:14:02
+    | 1168 days 12:14:03
+    | 1168 days 03:14:04
+    | 1168 days 02:14:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 273 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 18:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (55 rows)
 
 SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2004-02-29 15:44:17.71393' ) AS week_trunc;
- date_trunc_week |        week_trunc        
------------------+--------------------------
-                 | Mon Feb 23 00:00:00 2004
+ date_trunc_week |     week_trunc      
+-----------------+---------------------
+                 | 2004-02-23 00:00:00
 (1 row)
 
 -- Test casting within a BETWEEN qualifier
@@ -553,63 +553,63 @@
   FROM TIMESTAMP_TBL
   WHERE d1 BETWEEN timestamp without time zone '1902-01-01'
    AND timestamp without time zone '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 1724 days 18 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 13 hours 14 mins 2 secs
-    | @ 1168 days 12 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 2 hours 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 18 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |        diff         
+----+---------------------
+    | -9863 days
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:02
+    | 39 days 17:32:01.4
+    | 39 days 17:32:01.5
+    | 39 days 17:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 17:32:01
+    | 1724 days 18:19:20
+    | 1168 days 08:14:01
+    | 1168 days 13:14:02
+    | 1168 days 12:14:03
+    | 1168 days 03:14:04
+    | 1168 days 02:14:05
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 273 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:01
+    | 159 days 18:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
@@ -617,189 +617,189 @@
    date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
    date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | year | month | day | hour | minute | second 
-----+----------------------------+------+-------+-----+------+--------+--------
-    | Thu Jan 01 00:00:00 1970   | 1970 |     1 |   1 |    0 |      0 |      0
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:02 1997   | 1997 |     2 |  10 |   17 |     32 |      2
-    | Mon Feb 10 17:32:01.4 1997 | 1997 |     2 |  10 |   17 |     32 |    1.4
-    | Mon Feb 10 17:32:01.5 1997 | 1997 |     2 |  10 |   17 |     32 |    1.5
-    | Mon Feb 10 17:32:01.6 1997 | 1997 |     2 |  10 |   17 |     32 |    1.6
-    | Thu Jan 02 00:00:00 1997   | 1997 |     1 |   2 |    0 |      0 |      0
-    | Thu Jan 02 03:04:05 1997   | 1997 |     1 |   2 |    3 |      4 |      5
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 17:32:01 1997   | 1997 |     6 |  10 |   17 |     32 |      1
-    | Sat Sep 22 18:19:20 2001   | 2001 |     9 |  22 |   18 |     19 |     20
-    | Wed Mar 15 08:14:01 2000   | 2000 |     3 |  15 |    8 |     14 |      1
-    | Wed Mar 15 13:14:02 2000   | 2000 |     3 |  15 |   13 |     14 |      2
-    | Wed Mar 15 12:14:03 2000   | 2000 |     3 |  15 |   12 |     14 |      3
-    | Wed Mar 15 03:14:04 2000   | 2000 |     3 |  15 |    3 |     14 |      4
-    | Wed Mar 15 02:14:05 2000   | 2000 |     3 |  15 |    2 |     14 |      5
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:00 1997   | 1997 |     2 |  10 |   17 |     32 |      0
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 18:32:01 1997   | 1997 |     6 |  10 |   18 |     32 |      1
-    | Mon Feb 10 17:32:01 1997   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Feb 11 17:32:01 1997   | 1997 |     2 |  11 |   17 |     32 |      1
-    | Wed Feb 12 17:32:01 1997   | 1997 |     2 |  12 |   17 |     32 |      1
-    | Thu Feb 13 17:32:01 1997   | 1997 |     2 |  13 |   17 |     32 |      1
-    | Fri Feb 14 17:32:01 1997   | 1997 |     2 |  14 |   17 |     32 |      1
-    | Sat Feb 15 17:32:01 1997   | 1997 |     2 |  15 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Wed Feb 28 17:32:01 1996   | 1996 |     2 |  28 |   17 |     32 |      1
-    | Thu Feb 29 17:32:01 1996   | 1996 |     2 |  29 |   17 |     32 |      1
-    | Fri Mar 01 17:32:01 1996   | 1996 |     3 |   1 |   17 |     32 |      1
-    | Mon Dec 30 17:32:01 1996   | 1996 |    12 |  30 |   17 |     32 |      1
-    | Tue Dec 31 17:32:01 1996   | 1996 |    12 |  31 |   17 |     32 |      1
-    | Wed Jan 01 17:32:01 1997   | 1997 |     1 |   1 |   17 |     32 |      1
-    | Fri Feb 28 17:32:01 1997   | 1997 |     2 |  28 |   17 |     32 |      1
-    | Sat Mar 01 17:32:01 1997   | 1997 |     3 |   1 |   17 |     32 |      1
-    | Tue Dec 30 17:32:01 1997   | 1997 |    12 |  30 |   17 |     32 |      1
-    | Wed Dec 31 17:32:01 1997   | 1997 |    12 |  31 |   17 |     32 |      1
-    | Fri Dec 31 17:32:01 1999   | 1999 |    12 |  31 |   17 |     32 |      1
-    | Sat Jan 01 17:32:01 2000   | 2000 |     1 |   1 |   17 |     32 |      1
-    | Sun Dec 31 17:32:01 2000   | 2000 |    12 |  31 |   17 |     32 |      1
-    | Mon Jan 01 17:32:01 2001   | 2001 |     1 |   1 |   17 |     32 |      1
+ 54 |       timestamp       | year | month | day | hour | minute | second 
+----+-----------------------+------+-------+-----+------+--------+--------
+    | 1970-01-01 00:00:00   | 1970 |     1 |   1 |    0 |      0 |      0
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:02   | 1997 |     2 |  10 |   17 |     32 |      2
+    | 1997-02-10 17:32:01.4 | 1997 |     2 |  10 |   17 |     32 |    1.4
+    | 1997-02-10 17:32:01.5 | 1997 |     2 |  10 |   17 |     32 |    1.5
+    | 1997-02-10 17:32:01.6 | 1997 |     2 |  10 |   17 |     32 |    1.6
+    | 1997-01-02 00:00:00   | 1997 |     1 |   2 |    0 |      0 |      0
+    | 1997-01-02 03:04:05   | 1997 |     1 |   2 |    3 |      4 |      5
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-06-10 17:32:01   | 1997 |     6 |  10 |   17 |     32 |      1
+    | 2001-09-22 18:19:20   | 2001 |     9 |  22 |   18 |     19 |     20
+    | 2000-03-15 08:14:01   | 2000 |     3 |  15 |    8 |     14 |      1
+    | 2000-03-15 13:14:02   | 2000 |     3 |  15 |   13 |     14 |      2
+    | 2000-03-15 12:14:03   | 2000 |     3 |  15 |   12 |     14 |      3
+    | 2000-03-15 03:14:04   | 2000 |     3 |  15 |    3 |     14 |      4
+    | 2000-03-15 02:14:05   | 2000 |     3 |  15 |    2 |     14 |      5
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:00   | 1997 |     2 |  10 |   17 |     32 |      0
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-10-02 17:32:01   | 1997 |    10 |   2 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-06-10 18:32:01   | 1997 |     6 |  10 |   18 |     32 |      1
+    | 1997-02-10 17:32:01   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-11 17:32:01   | 1997 |     2 |  11 |   17 |     32 |      1
+    | 1997-02-12 17:32:01   | 1997 |     2 |  12 |   17 |     32 |      1
+    | 1997-02-13 17:32:01   | 1997 |     2 |  13 |   17 |     32 |      1
+    | 1997-02-14 17:32:01   | 1997 |     2 |  14 |   17 |     32 |      1
+    | 1997-02-15 17:32:01   | 1997 |     2 |  15 |   17 |     32 |      1
+    | 1997-02-16 17:32:01   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1997-02-16 17:32:01   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1996-02-28 17:32:01   | 1996 |     2 |  28 |   17 |     32 |      1
+    | 1996-02-29 17:32:01   | 1996 |     2 |  29 |   17 |     32 |      1
+    | 1996-03-01 17:32:01   | 1996 |     3 |   1 |   17 |     32 |      1
+    | 1996-12-30 17:32:01   | 1996 |    12 |  30 |   17 |     32 |      1
+    | 1996-12-31 17:32:01   | 1996 |    12 |  31 |   17 |     32 |      1
+    | 1997-01-01 17:32:01   | 1997 |     1 |   1 |   17 |     32 |      1
+    | 1997-02-28 17:32:01   | 1997 |     2 |  28 |   17 |     32 |      1
+    | 1997-03-01 17:32:01   | 1997 |     3 |   1 |   17 |     32 |      1
+    | 1997-12-30 17:32:01   | 1997 |    12 |  30 |   17 |     32 |      1
+    | 1997-12-31 17:32:01   | 1997 |    12 |  31 |   17 |     32 |      1
+    | 1999-12-31 17:32:01   | 1999 |    12 |  31 |   17 |     32 |      1
+    | 2000-01-01 17:32:01   | 2000 |     1 |   1 |   17 |     32 |      1
+    | 2000-12-31 17:32:01   | 2000 |    12 |  31 |   17 |     32 |      1
+    | 2001-01-01 17:32:01   | 2001 |     1 |   1 |   17 |     32 |      1
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
    date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
    date_part( 'usec', d1) AS usec
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | quarter | msec  |   usec   
-----+----------------------------+---------+-------+----------
-    | Thu Jan 01 00:00:00 1970   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:02 1997   |       1 |  2000 |  2000000
-    | Mon Feb 10 17:32:01.4 1997 |       1 |  1400 |  1400000
-    | Mon Feb 10 17:32:01.5 1997 |       1 |  1500 |  1500000
-    | Mon Feb 10 17:32:01.6 1997 |       1 |  1600 |  1600000
-    | Thu Jan 02 00:00:00 1997   |       1 |     0 |        0
-    | Thu Jan 02 03:04:05 1997   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Jun 10 17:32:01 1997   |       2 |  1000 |  1000000
-    | Sat Sep 22 18:19:20 2001   |       3 | 20000 | 20000000
-    | Wed Mar 15 08:14:01 2000   |       1 |  1000 |  1000000
-    | Wed Mar 15 13:14:02 2000   |       1 |  2000 |  2000000
-    | Wed Mar 15 12:14:03 2000   |       1 |  3000 |  3000000
-    | Wed Mar 15 03:14:04 2000   |       1 |  4000 |  4000000
-    | Wed Mar 15 02:14:05 2000   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:00 1997   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Jun 10 18:32:01 1997   |       2 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Feb 11 17:32:01 1997   |       1 |  1000 |  1000000
-    | Wed Feb 12 17:32:01 1997   |       1 |  1000 |  1000000
-    | Thu Feb 13 17:32:01 1997   |       1 |  1000 |  1000000
-    | Fri Feb 14 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sat Feb 15 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997   |       1 |  1000 |  1000000
-    | Wed Feb 28 17:32:01 1996   |       1 |  1000 |  1000000
-    | Thu Feb 29 17:32:01 1996   |       1 |  1000 |  1000000
-    | Fri Mar 01 17:32:01 1996   |       1 |  1000 |  1000000
-    | Mon Dec 30 17:32:01 1996   |       4 |  1000 |  1000000
-    | Tue Dec 31 17:32:01 1996   |       4 |  1000 |  1000000
-    | Wed Jan 01 17:32:01 1997   |       1 |  1000 |  1000000
-    | Fri Feb 28 17:32:01 1997   |       1 |  1000 |  1000000
-    | Sat Mar 01 17:32:01 1997   |       1 |  1000 |  1000000
-    | Tue Dec 30 17:32:01 1997   |       4 |  1000 |  1000000
-    | Wed Dec 31 17:32:01 1997   |       4 |  1000 |  1000000
-    | Fri Dec 31 17:32:01 1999   |       4 |  1000 |  1000000
-    | Sat Jan 01 17:32:01 2000   |       1 |  1000 |  1000000
-    | Sun Dec 31 17:32:01 2000   |       4 |  1000 |  1000000
-    | Mon Jan 01 17:32:01 2001   |       1 |  1000 |  1000000
+ 54 |       timestamp       | quarter | msec  |   usec   
+----+-----------------------+---------+-------+----------
+    | 1970-01-01 00:00:00   |       1 |     0 |        0
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:02   |       1 |  2000 |  2000000
+    | 1997-02-10 17:32:01.4 |       1 |  1400 |  1400000
+    | 1997-02-10 17:32:01.5 |       1 |  1500 |  1500000
+    | 1997-02-10 17:32:01.6 |       1 |  1600 |  1600000
+    | 1997-01-02 00:00:00   |       1 |     0 |        0
+    | 1997-01-02 03:04:05   |       1 |  5000 |  5000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-06-10 17:32:01   |       2 |  1000 |  1000000
+    | 2001-09-22 18:19:20   |       3 | 20000 | 20000000
+    | 2000-03-15 08:14:01   |       1 |  1000 |  1000000
+    | 2000-03-15 13:14:02   |       1 |  2000 |  2000000
+    | 2000-03-15 12:14:03   |       1 |  3000 |  3000000
+    | 2000-03-15 03:14:04   |       1 |  4000 |  4000000
+    | 2000-03-15 02:14:05   |       1 |  5000 |  5000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:00   |       1 |     0 |        0
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-10-02 17:32:01   |       4 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-06-10 18:32:01   |       2 |  1000 |  1000000
+    | 1997-02-10 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-11 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-12 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-13 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-14 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-15 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01   |       1 |  1000 |  1000000
+    | 1996-02-28 17:32:01   |       1 |  1000 |  1000000
+    | 1996-02-29 17:32:01   |       1 |  1000 |  1000000
+    | 1996-03-01 17:32:01   |       1 |  1000 |  1000000
+    | 1996-12-30 17:32:01   |       4 |  1000 |  1000000
+    | 1996-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 1997-01-01 17:32:01   |       1 |  1000 |  1000000
+    | 1997-02-28 17:32:01   |       1 |  1000 |  1000000
+    | 1997-03-01 17:32:01   |       1 |  1000 |  1000000
+    | 1997-12-30 17:32:01   |       4 |  1000 |  1000000
+    | 1997-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 1999-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 2000-01-01 17:32:01   |       1 |  1000 |  1000000
+    | 2000-12-31 17:32:01   |       4 |  1000 |  1000000
+    | 2001-01-01 17:32:01   |       1 |  1000 |  1000000
 (55 rows)
 
 SELECT '' AS "54", d1 as "timestamp",
    date_part( 'isoyear', d1) AS isoyear, date_part( 'week', d1) AS week,
    date_part( 'dow', d1) AS dow
    FROM TIMESTAMP_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |         timestamp          | isoyear | week | dow 
-----+----------------------------+---------+------+-----
-    | Thu Jan 01 00:00:00 1970   |    1970 |    1 |   4
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:02 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.4 1997 |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.5 1997 |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.6 1997 |    1997 |    7 |   1
-    | Thu Jan 02 00:00:00 1997   |    1997 |    1 |   4
-    | Thu Jan 02 03:04:05 1997   |    1997 |    1 |   4
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Jun 10 17:32:01 1997   |    1997 |   24 |   2
-    | Sat Sep 22 18:19:20 2001   |    2001 |   38 |   6
-    | Wed Mar 15 08:14:01 2000   |    2000 |   11 |   3
-    | Wed Mar 15 13:14:02 2000   |    2000 |   11 |   3
-    | Wed Mar 15 12:14:03 2000   |    2000 |   11 |   3
-    | Wed Mar 15 03:14:04 2000   |    2000 |   11 |   3
-    | Wed Mar 15 02:14:05 2000   |    2000 |   11 |   3
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:00 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Jun 10 18:32:01 1997   |    1997 |   24 |   2
-    | Mon Feb 10 17:32:01 1997   |    1997 |    7 |   1
-    | Tue Feb 11 17:32:01 1997   |    1997 |    7 |   2
-    | Wed Feb 12 17:32:01 1997   |    1997 |    7 |   3
-    | Thu Feb 13 17:32:01 1997   |    1997 |    7 |   4
-    | Fri Feb 14 17:32:01 1997   |    1997 |    7 |   5
-    | Sat Feb 15 17:32:01 1997   |    1997 |    7 |   6
-    | Sun Feb 16 17:32:01 1997   |    1997 |    7 |   0
-    | Sun Feb 16 17:32:01 1997   |    1997 |    7 |   0
-    | Wed Feb 28 17:32:01 1996   |    1996 |    9 |   3
-    | Thu Feb 29 17:32:01 1996   |    1996 |    9 |   4
-    | Fri Mar 01 17:32:01 1996   |    1996 |    9 |   5
-    | Mon Dec 30 17:32:01 1996   |    1997 |    1 |   1
-    | Tue Dec 31 17:32:01 1996   |    1997 |    1 |   2
-    | Wed Jan 01 17:32:01 1997   |    1997 |    1 |   3
-    | Fri Feb 28 17:32:01 1997   |    1997 |    9 |   5
-    | Sat Mar 01 17:32:01 1997   |    1997 |    9 |   6
-    | Tue Dec 30 17:32:01 1997   |    1998 |    1 |   2
-    | Wed Dec 31 17:32:01 1997   |    1998 |    1 |   3
-    | Fri Dec 31 17:32:01 1999   |    1999 |   52 |   5
-    | Sat Jan 01 17:32:01 2000   |    1999 |   52 |   6
-    | Sun Dec 31 17:32:01 2000   |    2000 |   52 |   0
-    | Mon Jan 01 17:32:01 2001   |    2001 |    1 |   1
+ 54 |       timestamp       | isoyear | week | dow 
+----+-----------------------+---------+------+-----
+    | 1970-01-01 00:00:00   |    1970 |    1 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:02   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.4 |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.5 |    1997 |    7 |   1
+    | 1997-02-10 17:32:01.6 |    1997 |    7 |   1
+    | 1997-01-02 00:00:00   |    1997 |    1 |   4
+    | 1997-01-02 03:04:05   |    1997 |    1 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-06-10 17:32:01   |    1997 |   24 |   2
+    | 2001-09-22 18:19:20   |    2001 |   38 |   6
+    | 2000-03-15 08:14:01   |    2000 |   11 |   3
+    | 2000-03-15 13:14:02   |    2000 |   11 |   3
+    | 2000-03-15 12:14:03   |    2000 |   11 |   3
+    | 2000-03-15 03:14:04   |    2000 |   11 |   3
+    | 2000-03-15 02:14:05   |    2000 |   11 |   3
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:00   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-10-02 17:32:01   |    1997 |   40 |   4
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-06-10 18:32:01   |    1997 |   24 |   2
+    | 1997-02-10 17:32:01   |    1997 |    7 |   1
+    | 1997-02-11 17:32:01   |    1997 |    7 |   2
+    | 1997-02-12 17:32:01   |    1997 |    7 |   3
+    | 1997-02-13 17:32:01   |    1997 |    7 |   4
+    | 1997-02-14 17:32:01   |    1997 |    7 |   5
+    | 1997-02-15 17:32:01   |    1997 |    7 |   6
+    | 1997-02-16 17:32:01   |    1997 |    7 |   0
+    | 1997-02-16 17:32:01   |    1997 |    7 |   0
+    | 1996-02-28 17:32:01   |    1996 |    9 |   3
+    | 1996-02-29 17:32:01   |    1996 |    9 |   4
+    | 1996-03-01 17:32:01   |    1996 |    9 |   5
+    | 1996-12-30 17:32:01   |    1997 |    1 |   1
+    | 1996-12-31 17:32:01   |    1997 |    1 |   2
+    | 1997-01-01 17:32:01   |    1997 |    1 |   3
+    | 1997-02-28 17:32:01   |    1997 |    9 |   5
+    | 1997-03-01 17:32:01   |    1997 |    9 |   6
+    | 1997-12-30 17:32:01   |    1998 |    1 |   2
+    | 1997-12-31 17:32:01   |    1998 |    1 |   3
+    | 1999-12-31 17:32:01   |    1999 |   52 |   5
+    | 2000-01-01 17:32:01   |    1999 |   52 |   6
+    | 2000-12-31 17:32:01   |    2000 |   52 |   0
+    | 2001-01-01 17:32:01   |    2001 |    1 |   1
 (55 rows)
 
 -- TO_CHAR()
@@ -835,7 +835,7 @@
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
-           | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
+           | THURSDAY  Thursday  thursday  THU Thu thu OCTOBER   October   october   X    OCT Oct oct
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
@@ -906,7 +906,7 @@
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
-           | MONDAY Monday monday FEBRUARY February february II
+           | THURSDAY Thursday thursday OCTOBER October october X
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
@@ -977,7 +977,7 @@
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 02 5 2450724
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
@@ -1048,7 +1048,7 @@
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 2 5 2450724
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
@@ -1332,7 +1332,7 @@
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
-           | 1997TH 1997th 2450490th
+           | 1997TH 1997th 2450724th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
@@ -1474,7 +1474,7 @@
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
-            | 1997 997 97 7 07 043 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
@@ -1545,7 +1545,7 @@
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
-            | 1997 997 97 7 7 43 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
@@ -1586,8 +1586,8 @@
 
 -- timestamp numeric fields constructor
 SELECT make_timestamp(2014,12,28,6,30,45.887);
-        make_timestamp        
-------------------------------
- Sun Dec 28 06:30:45.887 2014
+     make_timestamp      
+-------------------------
+ 2014-12-28 06:30:45.887
 (1 row)
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamptz.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamptz.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/timestamptz.out	2019-08-12 14:55:05.462233339 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/timestamptz.out	2019-09-05 16:22:58.623680545 -0500
@@ -33,7 +33,7 @@
 SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'tomorrow';
  one 
 -----
-   1
+   2
 (1 row)
 
 SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'yesterday';
@@ -118,16 +118,16 @@
 -- timestamps at different timezones
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970210 173201 America/New_York');
 SELECT '19970210 173201' AT TIME ZONE 'America/New_York';
-         timezone         
---------------------------
- Mon Feb 10 20:32:01 1997
+      timezone       
+---------------------
+ 1997-02-10 17:32:01
 (1 row)
 
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970710 173201 America/New_York');
 SELECT '19970710 173201' AT TIME ZONE 'America/New_York';
-         timezone         
---------------------------
- Thu Jul 10 20:32:01 1997
+      timezone       
+---------------------
+ 1997-07-10 18:32:01
 (1 row)
 
 INSERT INTO TIMESTAMPTZ_TBL VALUES ('19970710 173201 America/Does_not_exist');
@@ -138,27 +138,27 @@
 ERROR:  time zone "America/Does_not_exist" not recognized
 -- Daylight saving time for timestamps beyond 32-bit time_t range.
 SELECT '20500710 173201 Europe/Helsinki'::timestamptz; -- DST
-         timestamptz          
-------------------------------
- Sun Jul 10 07:32:01 2050 PDT
+      timestamptz       
+------------------------
+ 2050-07-10 09:32:01-05
 (1 row)
 
 SELECT '20500110 173201 Europe/Helsinki'::timestamptz; -- non-DST
-         timestamptz          
-------------------------------
- Mon Jan 10 07:32:01 2050 PST
+      timestamptz       
+------------------------
+ 2050-01-10 10:32:01-05
 (1 row)
 
 SELECT '205000-07-10 17:32:01 Europe/Helsinki'::timestamptz; -- DST
-          timestamptz           
---------------------------------
- Thu Jul 10 07:32:01 205000 PDT
+       timestamptz        
+--------------------------
+ 205000-07-10 09:32:01-05
 (1 row)
 
 SELECT '205000-01-10 17:32:01 Europe/Helsinki'::timestamptz; -- non-DST
-          timestamptz           
---------------------------------
- Fri Jan 10 07:32:01 205000 PST
+       timestamptz        
+--------------------------
+ 205000-01-10 10:32:01-05
 (1 row)
 
 -- Check date conversion and date arithmetic
@@ -209,33 +209,33 @@
 -- Alternative field order that we've historically supported (sort of)
 -- with regular and POSIXy timezone specs
 SELECT 'Wed Jul 11 10:51:14 America/New_York 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 07:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 09:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 GMT-4 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Tue Jul 10 23:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 01:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 GMT+4 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 07:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 09:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 PST-03:00 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 00:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 02:51:14-05
 (1 row)
 
 SELECT 'Wed Jul 11 10:51:14 PST+03:00 2001'::timestamptz;
-         timestamptz          
-------------------------------
- Wed Jul 11 06:51:14 2001 PDT
+      timestamptz       
+------------------------
+ 2001-07-11 08:51:14-05
 (1 row)
 
 SELECT '' AS "64", d1 FROM TIMESTAMPTZ_TBL;
@@ -243,89 +243,89 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 00:00:00-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (66 rows)
 
 -- Check behavior at the lower boundary of the timestamp range
 SELECT '4714-11-24 00:00:00+00 BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT '4714-11-23 16:00:00-08 BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT 'Sun Nov 23 16:00:00 4714 PST BC'::timestamptz;
            timestamptz           
 ---------------------------------
- Sun Nov 23 16:00:00 4714 PST BC
+ 4714-11-23 18:40:40-05:19:20 BC
 (1 row)
 
 SELECT '4714-11-23 23:59:59+00 BC'::timestamptz;  -- out of range
@@ -336,58 +336,58 @@
 -- Demonstrate functions and operators
 SELECT '' AS "48", d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 > timestamp with time zone '1997-01-02';
- 48 |               d1               
-----+--------------------------------
+ 48 |            d1            
+----+--------------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (50 rows)
 
 SELECT '' AS "15", d1 FROM TIMESTAMPTZ_TBL
@@ -395,27 +395,27 @@
  15 |               d1                
 ----+---------------------------------
     | -infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
+    | 1969-12-31 19:00:00-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
 (15 rows)
 
 SELECT '' AS one, d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 = timestamp with time zone '1997-01-02';
- one |              d1              
------+------------------------------
-     | Thu Jan 02 00:00:00 1997 PST
+ one |           d1           
+-----+------------------------
+     | 1997-01-02 00:00:00-05
 (1 row)
 
 SELECT '' AS "63", d1 FROM TIMESTAMPTZ_TBL
@@ -424,69 +424,69 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (65 rows)
 
 SELECT '' AS "16", d1 FROM TIMESTAMPTZ_TBL
@@ -494,228 +494,228 @@
  16 |               d1                
 ----+---------------------------------
     | -infinity
-    | Wed Dec 31 16:00:00 1969 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Tue Feb 16 17:32:01 0097 PST BC
-    | Sat Feb 16 17:32:01 0097 PST
-    | Thu Feb 16 17:32:01 0597 PST
-    | Tue Feb 16 17:32:01 1097 PST
-    | Sat Feb 16 17:32:01 1697 PST
-    | Thu Feb 16 17:32:01 1797 PST
-    | Tue Feb 16 17:32:01 1897 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Wed Jan 01 17:32:01 1997 PST
+    | 1969-12-31 19:00:00-05
+    | 1997-01-02 00:00:00-05
+    | 0097-02-16 17:32:01-05:19:20 BC
+    | 0097-02-16 17:32:01-05:19:20
+    | 0597-02-16 17:32:01-05:19:20
+    | 1097-02-16 17:32:01-05:19:20
+    | 1697-02-16 17:32:01-05:19:20
+    | 1797-02-16 17:32:01-05:19:20
+    | 1897-02-16 17:32:01-05:14
+    | 1996-02-28 17:32:01-05
+    | 1996-02-29 17:32:01-05
+    | 1996-03-01 17:32:01-05
+    | 1996-12-30 17:32:01-05
+    | 1996-12-31 17:32:01-05
+    | 1997-01-01 17:32:01-05
 (16 rows)
 
 SELECT '' AS "49", d1 FROM TIMESTAMPTZ_TBL
    WHERE d1 >= timestamp with time zone '1997-01-02';
- 49 |               d1               
-----+--------------------------------
+ 49 |            d1            
+----+--------------------------
     | infinity
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:02 1997 PST
-    | Mon Feb 10 17:32:01.4 1997 PST
-    | Mon Feb 10 17:32:01.5 1997 PST
-    | Mon Feb 10 17:32:01.6 1997 PST
-    | Thu Jan 02 00:00:00 1997 PST
-    | Thu Jan 02 03:04:05 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Jun 10 17:32:01 1997 PDT
-    | Sat Sep 22 18:19:20 2001 PDT
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 04:14:02 2000 PST
-    | Wed Mar 15 02:14:03 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 01:14:05 2000 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:00 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 17:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 09:32:01 1997 PST
-    | Mon Feb 10 14:32:01 1997 PST
-    | Thu Jul 10 14:32:01 1997 PDT
-    | Tue Jun 10 18:32:01 1997 PDT
-    | Mon Feb 10 17:32:01 1997 PST
-    | Tue Feb 11 17:32:01 1997 PST
-    | Wed Feb 12 17:32:01 1997 PST
-    | Thu Feb 13 17:32:01 1997 PST
-    | Fri Feb 14 17:32:01 1997 PST
-    | Sat Feb 15 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sun Feb 16 17:32:01 1997 PST
-    | Sat Feb 16 17:32:01 2097 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:02-05
+    | 1997-02-10 20:32:01.4-05
+    | 1997-02-10 20:32:01.5-05
+    | 1997-02-10 20:32:01.6-05
+    | 1997-01-02 00:00:00-05
+    | 1997-01-02 03:04:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-06-10 19:32:01-05
+    | 2001-09-22 18:19:20-05
+    | 2000-03-15 11:14:01-05
+    | 2000-03-15 07:14:02-05
+    | 2000-03-15 05:14:03-05
+    | 2000-03-15 06:14:04-05
+    | 2000-03-15 04:14:05-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-10 17:32:00-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-10-02 20:32:01-05
+    | 1997-02-10 20:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 12:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-07-10 16:32:01-05
+    | 1997-06-10 20:32:01-05
+    | 1997-02-10 17:32:01-05
+    | 1997-02-11 17:32:01-05
+    | 1997-02-12 17:32:01-05
+    | 1997-02-13 17:32:01-05
+    | 1997-02-14 17:32:01-05
+    | 1997-02-15 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 1997-02-16 17:32:01-05
+    | 2097-02-16 17:32:01-05
+    | 1997-02-28 17:32:01-05
+    | 1997-03-01 17:32:01-05
+    | 1997-12-30 17:32:01-05
+    | 1997-12-31 17:32:01-05
+    | 1999-12-31 17:32:01-05
+    | 2000-01-01 17:32:01-05
+    | 2000-12-31 17:32:01-05
+    | 2001-01-01 17:32:01-05
 (51 rows)
 
 SELECT '' AS "54", d1 - timestamp with time zone '1997-01-02' AS diff
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days 8 hours ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 16 hours 32 mins 1 sec
-    | @ 1724 days 17 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 4 hours 14 mins 2 secs
-    | @ 1168 days 2 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 1 hour 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 14 hours 32 mins 1 sec
-    | @ 189 days 13 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |         diff         
+----+----------------------
+    | -9863 days -05:00:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:02
+    | 39 days 20:32:01.4
+    | 39 days 20:32:01.5
+    | 39 days 20:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 159 days 19:32:01
+    | 1724 days 18:19:20
+    | 1168 days 11:14:01
+    | 1168 days 07:14:02
+    | 1168 days 05:14:03
+    | 1168 days 06:14:04
+    | 1168 days 04:14:05
+    | 39 days 20:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 273 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 17:32:01
+    | 189 days 16:32:01
+    | 159 days 20:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (56 rows)
 
 SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp with time zone '2004-02-29 15:44:17.71393' ) AS week_trunc;
- date_trunc_week |          week_trunc          
------------------+------------------------------
-                 | Mon Feb 23 00:00:00 2004 PST
+ date_trunc_week |       week_trunc       
+-----------------+------------------------
+                 | 2004-02-23 00:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'Australia/Sydney') as sydney_trunc;  -- zone name
- date_trunc_at_tz |         sydney_trunc         
-------------------+------------------------------
-                  | Fri Feb 16 05:00:00 2001 PST
+ date_trunc_at_tz |      sydney_trunc      
+------------------+------------------------
+                  | 2001-02-16 08:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'GMT') as gmt_trunc;  -- fixed-offset abbreviation
- date_trunc_at_tz |          gmt_trunc           
-------------------+------------------------------
-                  | Thu Feb 15 16:00:00 2001 PST
+ date_trunc_at_tz |       gmt_trunc        
+------------------+------------------------
+                  | 2001-02-15 19:00:00-05
 (1 row)
 
 SELECT '' AS date_trunc_at_tz, date_trunc('day', timestamp with time zone '2001-02-16 20:38:40+00', 'VET') as vet_trunc;  -- variable-offset abbreviation
- date_trunc_at_tz |          vet_trunc           
-------------------+------------------------------
-                  | Thu Feb 15 20:00:00 2001 PST
+ date_trunc_at_tz |       vet_trunc        
+------------------+------------------------
+                  | 2001-02-15 23:00:00-05
 (1 row)
 
 -- Test casting within a BETWEEN qualifier
 SELECT '' AS "54", d1 - timestamp with time zone '1997-01-02' AS diff
   FROM TIMESTAMPTZ_TBL
   WHERE d1 BETWEEN timestamp with time zone '1902-01-01' AND timestamp with time zone '2038-01-01';
- 54 |                  diff                  
-----+----------------------------------------
-    | @ 9863 days 8 hours ago
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 2 secs
-    | @ 39 days 17 hours 32 mins 1.4 secs
-    | @ 39 days 17 hours 32 mins 1.5 secs
-    | @ 39 days 17 hours 32 mins 1.6 secs
-    | @ 0
-    | @ 3 hours 4 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 159 days 16 hours 32 mins 1 sec
-    | @ 1724 days 17 hours 19 mins 20 secs
-    | @ 1168 days 8 hours 14 mins 1 sec
-    | @ 1168 days 4 hours 14 mins 2 secs
-    | @ 1168 days 2 hours 14 mins 3 secs
-    | @ 1168 days 3 hours 14 mins 4 secs
-    | @ 1168 days 1 hour 14 mins 5 secs
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 9 hours 32 mins 1 sec
-    | @ 39 days 14 hours 32 mins 1 sec
-    | @ 189 days 13 hours 32 mins 1 sec
-    | @ 159 days 17 hours 32 mins 1 sec
-    | @ 39 days 17 hours 32 mins 1 sec
-    | @ 40 days 17 hours 32 mins 1 sec
-    | @ 41 days 17 hours 32 mins 1 sec
-    | @ 42 days 17 hours 32 mins 1 sec
-    | @ 43 days 17 hours 32 mins 1 sec
-    | @ 44 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 45 days 17 hours 32 mins 1 sec
-    | @ 308 days 6 hours 27 mins 59 secs ago
-    | @ 307 days 6 hours 27 mins 59 secs ago
-    | @ 306 days 6 hours 27 mins 59 secs ago
-    | @ 2 days 6 hours 27 mins 59 secs ago
-    | @ 1 day 6 hours 27 mins 59 secs ago
-    | @ 6 hours 27 mins 59 secs ago
-    | @ 57 days 17 hours 32 mins 1 sec
-    | @ 58 days 17 hours 32 mins 1 sec
-    | @ 362 days 17 hours 32 mins 1 sec
-    | @ 363 days 17 hours 32 mins 1 sec
-    | @ 1093 days 17 hours 32 mins 1 sec
-    | @ 1094 days 17 hours 32 mins 1 sec
-    | @ 1459 days 17 hours 32 mins 1 sec
-    | @ 1460 days 17 hours 32 mins 1 sec
+ 54 |         diff         
+----+----------------------
+    | -9863 days -05:00:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:02
+    | 39 days 20:32:01.4
+    | 39 days 20:32:01.5
+    | 39 days 20:32:01.6
+    | 00:00:00
+    | 03:04:05
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 159 days 19:32:01
+    | 1724 days 18:19:20
+    | 1168 days 11:14:01
+    | 1168 days 07:14:02
+    | 1168 days 05:14:03
+    | 1168 days 06:14:04
+    | 1168 days 04:14:05
+    | 39 days 20:32:01
+    | 39 days 17:32:01
+    | 39 days 17:32:00
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 20:32:01
+    | 273 days 20:32:01
+    | 39 days 20:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 12:32:01
+    | 39 days 17:32:01
+    | 189 days 16:32:01
+    | 159 days 20:32:01
+    | 39 days 17:32:01
+    | 40 days 17:32:01
+    | 41 days 17:32:01
+    | 42 days 17:32:01
+    | 43 days 17:32:01
+    | 44 days 17:32:01
+    | 45 days 17:32:01
+    | 45 days 17:32:01
+    | -308 days -06:27:59
+    | -307 days -06:27:59
+    | -306 days -06:27:59
+    | -2 days -06:27:59
+    | -1 days -06:27:59
+    | -06:27:59
+    | 57 days 17:32:01
+    | 58 days 17:32:01
+    | 362 days 17:32:01
+    | 363 days 17:32:01
+    | 1093 days 17:32:01
+    | 1094 days 17:32:01
+    | 1459 days 17:32:01
+    | 1460 days 17:32:01
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
@@ -723,192 +723,192 @@
    date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
    date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | year | month | day | hour | minute | second 
-----+--------------------------------+------+-------+-----+------+--------+--------
-    | Wed Dec 31 16:00:00 1969 PST   | 1969 |    12 |  31 |   16 |      0 |      0
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:02 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      2
-    | Mon Feb 10 17:32:01.4 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.4
-    | Mon Feb 10 17:32:01.5 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.5
-    | Mon Feb 10 17:32:01.6 1997 PST | 1997 |     2 |  10 |   17 |     32 |    1.6
-    | Thu Jan 02 00:00:00 1997 PST   | 1997 |     1 |   2 |    0 |      0 |      0
-    | Thu Jan 02 03:04:05 1997 PST   | 1997 |     1 |   2 |    3 |      4 |      5
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Jun 10 17:32:01 1997 PDT   | 1997 |     6 |  10 |   17 |     32 |      1
-    | Sat Sep 22 18:19:20 2001 PDT   | 2001 |     9 |  22 |   18 |     19 |     20
-    | Wed Mar 15 08:14:01 2000 PST   | 2000 |     3 |  15 |    8 |     14 |      1
-    | Wed Mar 15 04:14:02 2000 PST   | 2000 |     3 |  15 |    4 |     14 |      2
-    | Wed Mar 15 02:14:03 2000 PST   | 2000 |     3 |  15 |    2 |     14 |      3
-    | Wed Mar 15 03:14:04 2000 PST   | 2000 |     3 |  15 |    3 |     14 |      4
-    | Wed Mar 15 01:14:05 2000 PST   | 2000 |     3 |  15 |    1 |     14 |      5
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:00 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      0
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 09:32:01 1997 PST   | 1997 |     2 |  10 |    9 |     32 |      1
-    | Mon Feb 10 14:32:01 1997 PST   | 1997 |     2 |  10 |   14 |     32 |      1
-    | Thu Jul 10 14:32:01 1997 PDT   | 1997 |     7 |  10 |   14 |     32 |      1
-    | Tue Jun 10 18:32:01 1997 PDT   | 1997 |     6 |  10 |   18 |     32 |      1
-    | Mon Feb 10 17:32:01 1997 PST   | 1997 |     2 |  10 |   17 |     32 |      1
-    | Tue Feb 11 17:32:01 1997 PST   | 1997 |     2 |  11 |   17 |     32 |      1
-    | Wed Feb 12 17:32:01 1997 PST   | 1997 |     2 |  12 |   17 |     32 |      1
-    | Thu Feb 13 17:32:01 1997 PST   | 1997 |     2 |  13 |   17 |     32 |      1
-    | Fri Feb 14 17:32:01 1997 PST   | 1997 |     2 |  14 |   17 |     32 |      1
-    | Sat Feb 15 17:32:01 1997 PST   | 1997 |     2 |  15 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997 PST   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Sun Feb 16 17:32:01 1997 PST   | 1997 |     2 |  16 |   17 |     32 |      1
-    | Wed Feb 28 17:32:01 1996 PST   | 1996 |     2 |  28 |   17 |     32 |      1
-    | Thu Feb 29 17:32:01 1996 PST   | 1996 |     2 |  29 |   17 |     32 |      1
-    | Fri Mar 01 17:32:01 1996 PST   | 1996 |     3 |   1 |   17 |     32 |      1
-    | Mon Dec 30 17:32:01 1996 PST   | 1996 |    12 |  30 |   17 |     32 |      1
-    | Tue Dec 31 17:32:01 1996 PST   | 1996 |    12 |  31 |   17 |     32 |      1
-    | Wed Jan 01 17:32:01 1997 PST   | 1997 |     1 |   1 |   17 |     32 |      1
-    | Fri Feb 28 17:32:01 1997 PST   | 1997 |     2 |  28 |   17 |     32 |      1
-    | Sat Mar 01 17:32:01 1997 PST   | 1997 |     3 |   1 |   17 |     32 |      1
-    | Tue Dec 30 17:32:01 1997 PST   | 1997 |    12 |  30 |   17 |     32 |      1
-    | Wed Dec 31 17:32:01 1997 PST   | 1997 |    12 |  31 |   17 |     32 |      1
-    | Fri Dec 31 17:32:01 1999 PST   | 1999 |    12 |  31 |   17 |     32 |      1
-    | Sat Jan 01 17:32:01 2000 PST   | 2000 |     1 |   1 |   17 |     32 |      1
-    | Sun Dec 31 17:32:01 2000 PST   | 2000 |    12 |  31 |   17 |     32 |      1
-    | Mon Jan 01 17:32:01 2001 PST   | 2001 |     1 |   1 |   17 |     32 |      1
+ 54 |       timestamptz        | year | month | day | hour | minute | second 
+----+--------------------------+------+-------+-----+------+--------+--------
+    | 1969-12-31 19:00:00-05   | 1969 |    12 |  31 |   19 |      0 |      0
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:02-05   | 1997 |     2 |  10 |   20 |     32 |      2
+    | 1997-02-10 20:32:01.4-05 | 1997 |     2 |  10 |   20 |     32 |    1.4
+    | 1997-02-10 20:32:01.5-05 | 1997 |     2 |  10 |   20 |     32 |    1.5
+    | 1997-02-10 20:32:01.6-05 | 1997 |     2 |  10 |   20 |     32 |    1.6
+    | 1997-01-02 00:00:00-05   | 1997 |     1 |   2 |    0 |      0 |      0
+    | 1997-01-02 03:04:05-05   | 1997 |     1 |   2 |    3 |      4 |      5
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-06-10 19:32:01-05   | 1997 |     6 |  10 |   19 |     32 |      1
+    | 2001-09-22 18:19:20-05   | 2001 |     9 |  22 |   18 |     19 |     20
+    | 2000-03-15 11:14:01-05   | 2000 |     3 |  15 |   11 |     14 |      1
+    | 2000-03-15 07:14:02-05   | 2000 |     3 |  15 |    7 |     14 |      2
+    | 2000-03-15 05:14:03-05   | 2000 |     3 |  15 |    5 |     14 |      3
+    | 2000-03-15 06:14:04-05   | 2000 |     3 |  15 |    6 |     14 |      4
+    | 2000-03-15 04:14:05-05   | 2000 |     3 |  15 |    4 |     14 |      5
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-10 17:32:00-05   | 1997 |     2 |  10 |   17 |     32 |      0
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-10-02 20:32:01-05   | 1997 |    10 |   2 |   20 |     32 |      1
+    | 1997-02-10 20:32:01-05   | 1997 |     2 |  10 |   20 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 12:32:01-05   | 1997 |     2 |  10 |   12 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-07-10 16:32:01-05   | 1997 |     7 |  10 |   16 |     32 |      1
+    | 1997-06-10 20:32:01-05   | 1997 |     6 |  10 |   20 |     32 |      1
+    | 1997-02-10 17:32:01-05   | 1997 |     2 |  10 |   17 |     32 |      1
+    | 1997-02-11 17:32:01-05   | 1997 |     2 |  11 |   17 |     32 |      1
+    | 1997-02-12 17:32:01-05   | 1997 |     2 |  12 |   17 |     32 |      1
+    | 1997-02-13 17:32:01-05   | 1997 |     2 |  13 |   17 |     32 |      1
+    | 1997-02-14 17:32:01-05   | 1997 |     2 |  14 |   17 |     32 |      1
+    | 1997-02-15 17:32:01-05   | 1997 |     2 |  15 |   17 |     32 |      1
+    | 1997-02-16 17:32:01-05   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1997-02-16 17:32:01-05   | 1997 |     2 |  16 |   17 |     32 |      1
+    | 1996-02-28 17:32:01-05   | 1996 |     2 |  28 |   17 |     32 |      1
+    | 1996-02-29 17:32:01-05   | 1996 |     2 |  29 |   17 |     32 |      1
+    | 1996-03-01 17:32:01-05   | 1996 |     3 |   1 |   17 |     32 |      1
+    | 1996-12-30 17:32:01-05   | 1996 |    12 |  30 |   17 |     32 |      1
+    | 1996-12-31 17:32:01-05   | 1996 |    12 |  31 |   17 |     32 |      1
+    | 1997-01-01 17:32:01-05   | 1997 |     1 |   1 |   17 |     32 |      1
+    | 1997-02-28 17:32:01-05   | 1997 |     2 |  28 |   17 |     32 |      1
+    | 1997-03-01 17:32:01-05   | 1997 |     3 |   1 |   17 |     32 |      1
+    | 1997-12-30 17:32:01-05   | 1997 |    12 |  30 |   17 |     32 |      1
+    | 1997-12-31 17:32:01-05   | 1997 |    12 |  31 |   17 |     32 |      1
+    | 1999-12-31 17:32:01-05   | 1999 |    12 |  31 |   17 |     32 |      1
+    | 2000-01-01 17:32:01-05   | 2000 |     1 |   1 |   17 |     32 |      1
+    | 2000-12-31 17:32:01-05   | 2000 |    12 |  31 |   17 |     32 |      1
+    | 2001-01-01 17:32:01-05   | 2001 |     1 |   1 |   17 |     32 |      1
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
    date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
    date_part( 'usec', d1) AS usec
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | quarter | msec  |   usec   
-----+--------------------------------+---------+-------+----------
-    | Wed Dec 31 16:00:00 1969 PST   |       4 |     0 |        0
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:02 1997 PST   |       1 |  2000 |  2000000
-    | Mon Feb 10 17:32:01.4 1997 PST |       1 |  1400 |  1400000
-    | Mon Feb 10 17:32:01.5 1997 PST |       1 |  1500 |  1500000
-    | Mon Feb 10 17:32:01.6 1997 PST |       1 |  1600 |  1600000
-    | Thu Jan 02 00:00:00 1997 PST   |       1 |     0 |        0
-    | Thu Jan 02 03:04:05 1997 PST   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Jun 10 17:32:01 1997 PDT   |       2 |  1000 |  1000000
-    | Sat Sep 22 18:19:20 2001 PDT   |       3 | 20000 | 20000000
-    | Wed Mar 15 08:14:01 2000 PST   |       1 |  1000 |  1000000
-    | Wed Mar 15 04:14:02 2000 PST   |       1 |  2000 |  2000000
-    | Wed Mar 15 02:14:03 2000 PST   |       1 |  3000 |  3000000
-    | Wed Mar 15 03:14:04 2000 PST   |       1 |  4000 |  4000000
-    | Wed Mar 15 01:14:05 2000 PST   |       1 |  5000 |  5000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:00 1997 PST   |       1 |     0 |        0
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 09:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Mon Feb 10 14:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Thu Jul 10 14:32:01 1997 PDT   |       3 |  1000 |  1000000
-    | Tue Jun 10 18:32:01 1997 PDT   |       2 |  1000 |  1000000
-    | Mon Feb 10 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Feb 11 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Wed Feb 12 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Thu Feb 13 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Fri Feb 14 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sat Feb 15 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sun Feb 16 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Wed Feb 28 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Thu Feb 29 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Fri Mar 01 17:32:01 1996 PST   |       1 |  1000 |  1000000
-    | Mon Dec 30 17:32:01 1996 PST   |       4 |  1000 |  1000000
-    | Tue Dec 31 17:32:01 1996 PST   |       4 |  1000 |  1000000
-    | Wed Jan 01 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Fri Feb 28 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Sat Mar 01 17:32:01 1997 PST   |       1 |  1000 |  1000000
-    | Tue Dec 30 17:32:01 1997 PST   |       4 |  1000 |  1000000
-    | Wed Dec 31 17:32:01 1997 PST   |       4 |  1000 |  1000000
-    | Fri Dec 31 17:32:01 1999 PST   |       4 |  1000 |  1000000
-    | Sat Jan 01 17:32:01 2000 PST   |       1 |  1000 |  1000000
-    | Sun Dec 31 17:32:01 2000 PST   |       4 |  1000 |  1000000
-    | Mon Jan 01 17:32:01 2001 PST   |       1 |  1000 |  1000000
+ 54 |       timestamptz        | quarter | msec  |   usec   
+----+--------------------------+---------+-------+----------
+    | 1969-12-31 19:00:00-05   |       4 |     0 |        0
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:02-05   |       1 |  2000 |  2000000
+    | 1997-02-10 20:32:01.4-05 |       1 |  1400 |  1400000
+    | 1997-02-10 20:32:01.5-05 |       1 |  1500 |  1500000
+    | 1997-02-10 20:32:01.6-05 |       1 |  1600 |  1600000
+    | 1997-01-02 00:00:00-05   |       1 |     0 |        0
+    | 1997-01-02 03:04:05-05   |       1 |  5000 |  5000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-06-10 19:32:01-05   |       2 |  1000 |  1000000
+    | 2001-09-22 18:19:20-05   |       3 | 20000 | 20000000
+    | 2000-03-15 11:14:01-05   |       1 |  1000 |  1000000
+    | 2000-03-15 07:14:02-05   |       1 |  2000 |  2000000
+    | 2000-03-15 05:14:03-05   |       1 |  3000 |  3000000
+    | 2000-03-15 06:14:04-05   |       1 |  4000 |  4000000
+    | 2000-03-15 04:14:05-05   |       1 |  5000 |  5000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:00-05   |       1 |     0 |        0
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-10-02 20:32:01-05   |       4 |  1000 |  1000000
+    | 1997-02-10 20:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 12:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-07-10 16:32:01-05   |       3 |  1000 |  1000000
+    | 1997-06-10 20:32:01-05   |       2 |  1000 |  1000000
+    | 1997-02-10 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-11 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-12 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-13 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-14 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-15 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-16 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-02-28 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-02-29 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-03-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1996-12-30 17:32:01-05   |       4 |  1000 |  1000000
+    | 1996-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 1997-01-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-02-28 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-03-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 1997-12-30 17:32:01-05   |       4 |  1000 |  1000000
+    | 1997-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 1999-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 2000-01-01 17:32:01-05   |       1 |  1000 |  1000000
+    | 2000-12-31 17:32:01-05   |       4 |  1000 |  1000000
+    | 2001-01-01 17:32:01-05   |       1 |  1000 |  1000000
 (56 rows)
 
 SELECT '' AS "54", d1 as timestamptz,
    date_part( 'isoyear', d1) AS isoyear, date_part( 'week', d1) AS week,
    date_part( 'dow', d1) AS dow
    FROM TIMESTAMPTZ_TBL WHERE d1 BETWEEN '1902-01-01' AND '2038-01-01';
- 54 |          timestamptz           | isoyear | week | dow 
-----+--------------------------------+---------+------+-----
-    | Wed Dec 31 16:00:00 1969 PST   |    1970 |    1 |   3
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:02 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.4 1997 PST |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.5 1997 PST |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01.6 1997 PST |    1997 |    7 |   1
-    | Thu Jan 02 00:00:00 1997 PST   |    1997 |    1 |   4
-    | Thu Jan 02 03:04:05 1997 PST   |    1997 |    1 |   4
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Tue Jun 10 17:32:01 1997 PDT   |    1997 |   24 |   2
-    | Sat Sep 22 18:19:20 2001 PDT   |    2001 |   38 |   6
-    | Wed Mar 15 08:14:01 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 04:14:02 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 02:14:03 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 03:14:04 2000 PST   |    2000 |   11 |   3
-    | Wed Mar 15 01:14:05 2000 PST   |    2000 |   11 |   3
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:00 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 09:32:01 1997 PST   |    1997 |    7 |   1
-    | Mon Feb 10 14:32:01 1997 PST   |    1997 |    7 |   1
-    | Thu Jul 10 14:32:01 1997 PDT   |    1997 |   28 |   4
-    | Tue Jun 10 18:32:01 1997 PDT   |    1997 |   24 |   2
-    | Mon Feb 10 17:32:01 1997 PST   |    1997 |    7 |   1
-    | Tue Feb 11 17:32:01 1997 PST   |    1997 |    7 |   2
-    | Wed Feb 12 17:32:01 1997 PST   |    1997 |    7 |   3
-    | Thu Feb 13 17:32:01 1997 PST   |    1997 |    7 |   4
-    | Fri Feb 14 17:32:01 1997 PST   |    1997 |    7 |   5
-    | Sat Feb 15 17:32:01 1997 PST   |    1997 |    7 |   6
-    | Sun Feb 16 17:32:01 1997 PST   |    1997 |    7 |   0
-    | Sun Feb 16 17:32:01 1997 PST   |    1997 |    7 |   0
-    | Wed Feb 28 17:32:01 1996 PST   |    1996 |    9 |   3
-    | Thu Feb 29 17:32:01 1996 PST   |    1996 |    9 |   4
-    | Fri Mar 01 17:32:01 1996 PST   |    1996 |    9 |   5
-    | Mon Dec 30 17:32:01 1996 PST   |    1997 |    1 |   1
-    | Tue Dec 31 17:32:01 1996 PST   |    1997 |    1 |   2
-    | Wed Jan 01 17:32:01 1997 PST   |    1997 |    1 |   3
-    | Fri Feb 28 17:32:01 1997 PST   |    1997 |    9 |   5
-    | Sat Mar 01 17:32:01 1997 PST   |    1997 |    9 |   6
-    | Tue Dec 30 17:32:01 1997 PST   |    1998 |    1 |   2
-    | Wed Dec 31 17:32:01 1997 PST   |    1998 |    1 |   3
-    | Fri Dec 31 17:32:01 1999 PST   |    1999 |   52 |   5
-    | Sat Jan 01 17:32:01 2000 PST   |    1999 |   52 |   6
-    | Sun Dec 31 17:32:01 2000 PST   |    2000 |   52 |   0
-    | Mon Jan 01 17:32:01 2001 PST   |    2001 |    1 |   1
+ 54 |       timestamptz        | isoyear | week | dow 
+----+--------------------------+---------+------+-----
+    | 1969-12-31 19:00:00-05   |    1970 |    1 |   3
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:02-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.4-05 |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.5-05 |    1997 |    7 |   1
+    | 1997-02-10 20:32:01.6-05 |    1997 |    7 |   1
+    | 1997-01-02 00:00:00-05   |    1997 |    1 |   4
+    | 1997-01-02 03:04:05-05   |    1997 |    1 |   4
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-06-10 19:32:01-05   |    1997 |   24 |   2
+    | 2001-09-22 18:19:20-05   |    2001 |   38 |   6
+    | 2000-03-15 11:14:01-05   |    2000 |   11 |   3
+    | 2000-03-15 07:14:02-05   |    2000 |   11 |   3
+    | 2000-03-15 05:14:03-05   |    2000 |   11 |   3
+    | 2000-03-15 06:14:04-05   |    2000 |   11 |   3
+    | 2000-03-15 04:14:05-05   |    2000 |   11 |   3
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:00-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-10-02 20:32:01-05   |    1997 |   40 |   4
+    | 1997-02-10 20:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 12:32:01-05   |    1997 |    7 |   1
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-07-10 16:32:01-05   |    1997 |   28 |   4
+    | 1997-06-10 20:32:01-05   |    1997 |   24 |   2
+    | 1997-02-10 17:32:01-05   |    1997 |    7 |   1
+    | 1997-02-11 17:32:01-05   |    1997 |    7 |   2
+    | 1997-02-12 17:32:01-05   |    1997 |    7 |   3
+    | 1997-02-13 17:32:01-05   |    1997 |    7 |   4
+    | 1997-02-14 17:32:01-05   |    1997 |    7 |   5
+    | 1997-02-15 17:32:01-05   |    1997 |    7 |   6
+    | 1997-02-16 17:32:01-05   |    1997 |    7 |   0
+    | 1997-02-16 17:32:01-05   |    1997 |    7 |   0
+    | 1996-02-28 17:32:01-05   |    1996 |    9 |   3
+    | 1996-02-29 17:32:01-05   |    1996 |    9 |   4
+    | 1996-03-01 17:32:01-05   |    1996 |    9 |   5
+    | 1996-12-30 17:32:01-05   |    1997 |    1 |   1
+    | 1996-12-31 17:32:01-05   |    1997 |    1 |   2
+    | 1997-01-01 17:32:01-05   |    1997 |    1 |   3
+    | 1997-02-28 17:32:01-05   |    1997 |    9 |   5
+    | 1997-03-01 17:32:01-05   |    1997 |    9 |   6
+    | 1997-12-30 17:32:01-05   |    1998 |    1 |   2
+    | 1997-12-31 17:32:01-05   |    1998 |    1 |   3
+    | 1999-12-31 17:32:01-05   |    1999 |   52 |   5
+    | 2000-01-01 17:32:01-05   |    1999 |   52 |   6
+    | 2000-12-31 17:32:01-05   |    2000 |   52 |   0
+    | 2001-01-01 17:32:01-05   |    2001 |    1 |   1
 (56 rows)
 
 -- TO_CHAR()
@@ -944,7 +944,7 @@
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
-           | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
+           | THURSDAY  Thursday  thursday  THU Thu thu OCTOBER   October   october   X    OCT Oct oct
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
            | MONDAY    Monday    monday    MON Mon mon FEBRUARY  February  february  II   FEB Feb feb
@@ -1016,7 +1016,7 @@
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
-           | MONDAY Monday monday FEBRUARY February february II
+           | THURSDAY Thursday thursday OCTOBER October october X
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
            | MONDAY Monday monday FEBRUARY February february II
@@ -1088,7 +1088,7 @@
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 02 5 2450724
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
            | 1,997 1997 997 97 7 20 1 02 06 041 10 2 2450490
@@ -1160,7 +1160,7 @@
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
-           | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
+           | 1,997 1997 997 97 7 20 4 10 40 275 2 5 2450724
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
            | 1,997 1997 997 97 7 20 1 2 6 41 10 2 2450490
@@ -1206,40 +1206,40 @@
 -----------+----------------------
            | 
            | 
-           | 04 04 16 00 00 57600
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 02 63122
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
+           | 07 07 19 00 00 68400
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 02 73922
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
            | 12 12 00 00 00 0
            | 03 03 03 04 05 11045
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 07 07 19 32 01 70321
            | 06 06 18 19 20 65960
-           | 08 08 08 14 01 29641
-           | 04 04 04 14 02 15242
-           | 02 02 02 14 03 8043
-           | 03 03 03 14 04 11644
-           | 01 01 01 14 05 4445
-           | 05 05 17 32 01 63121
+           | 11 11 11 14 01 40441
+           | 07 07 07 14 02 26042
+           | 05 05 05 14 03 18843
+           | 06 06 06 14 04 22444
+           | 04 04 04 14 05 15245
+           | 08 08 20 32 01 73921
            | 05 05 17 32 01 63121
            | 05 05 17 32 00 63120
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 08 08 20 32 01 73921
+           | 12 12 12 32 01 45121
+           | 12 12 12 32 01 45121
+           | 12 12 12 32 01 45121
            | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 05 05 17 32 01 63121
-           | 09 09 09 32 01 34321
-           | 09 09 09 32 01 34321
-           | 09 09 09 32 01 34321
-           | 02 02 14 32 01 52321
-           | 02 02 14 32 01 52321
-           | 06 06 18 32 01 66721
+           | 04 04 16 32 01 59521
+           | 08 08 20 32 01 73921
            | 05 05 17 32 01 63121
            | 05 05 17 32 01 63121
            | 05 05 17 32 01 63121
@@ -1278,40 +1278,40 @@
 -----------+-------------------------------------------------
            | 
            | 
-           | HH:MI:SS is 04:00:00 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:02 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 07:00:00 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:02 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 12:00:00 "text between quote marks"
            | HH:MI:SS is 03:04:05 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 07:32:01 "text between quote marks"
            | HH:MI:SS is 06:19:20 "text between quote marks"
-           | HH:MI:SS is 08:14:01 "text between quote marks"
-           | HH:MI:SS is 04:14:02 "text between quote marks"
-           | HH:MI:SS is 02:14:03 "text between quote marks"
-           | HH:MI:SS is 03:14:04 "text between quote marks"
-           | HH:MI:SS is 01:14:05 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
+           | HH:MI:SS is 11:14:01 "text between quote marks"
+           | HH:MI:SS is 07:14:02 "text between quote marks"
+           | HH:MI:SS is 05:14:03 "text between quote marks"
+           | HH:MI:SS is 06:14:04 "text between quote marks"
+           | HH:MI:SS is 04:14:05 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:00 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
+           | HH:MI:SS is 12:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 05:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 09:32:01 "text between quote marks"
-           | HH:MI:SS is 02:32:01 "text between quote marks"
-           | HH:MI:SS is 02:32:01 "text between quote marks"
-           | HH:MI:SS is 06:32:01 "text between quote marks"
+           | HH:MI:SS is 04:32:01 "text between quote marks"
+           | HH:MI:SS is 08:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
            | HH:MI:SS is 05:32:01 "text between quote marks"
@@ -1350,40 +1350,40 @@
 -----------+------------------------
            | 
            | 
-           | 16--text--00--text--00
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--02
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
+           | 19--text--00--text--00
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--02
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
            | 00--text--00--text--00
            | 03--text--04--text--05
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 19--text--32--text--01
            | 18--text--19--text--20
-           | 08--text--14--text--01
-           | 04--text--14--text--02
-           | 02--text--14--text--03
-           | 03--text--14--text--04
-           | 01--text--14--text--05
-           | 17--text--32--text--01
+           | 11--text--14--text--01
+           | 07--text--14--text--02
+           | 05--text--14--text--03
+           | 06--text--14--text--04
+           | 04--text--14--text--05
+           | 20--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--00
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 20--text--32--text--01
+           | 12--text--32--text--01
+           | 12--text--32--text--01
+           | 12--text--32--text--01
            | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 17--text--32--text--01
-           | 09--text--32--text--01
-           | 09--text--32--text--01
-           | 09--text--32--text--01
-           | 14--text--32--text--01
-           | 14--text--32--text--01
-           | 18--text--32--text--01
+           | 16--text--32--text--01
+           | 20--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--01
            | 17--text--32--text--01
@@ -1448,7 +1448,7 @@
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
-           | 1997TH 1997th 2450490th
+           | 1997TH 1997th 2450724th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
            | 1997TH 1997th 2450490th
@@ -1494,40 +1494,40 @@
 -----------+---------------------------------------------------------------------
            | 
            | 
-           | 1969 A.D. 1969 a.d. 1969 ad 04:00:00 P.M. 04:00:00 p.m. 04:00:00 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:02 P.M. 05:32:02 p.m. 05:32:02 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 1969 A.D. 1969 a.d. 1969 ad 07:00:00 P.M. 07:00:00 p.m. 07:00:00 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:02 P.M. 08:32:02 p.m. 08:32:02 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 12:00:00 A.M. 12:00:00 a.m. 12:00:00 am
            | 1997 A.D. 1997 a.d. 1997 ad 03:04:05 A.M. 03:04:05 a.m. 03:04:05 am
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 07:32:01 P.M. 07:32:01 p.m. 07:32:01 pm
            | 2001 A.D. 2001 a.d. 2001 ad 06:19:20 P.M. 06:19:20 p.m. 06:19:20 pm
-           | 2000 A.D. 2000 a.d. 2000 ad 08:14:01 A.M. 08:14:01 a.m. 08:14:01 am
-           | 2000 A.D. 2000 a.d. 2000 ad 04:14:02 A.M. 04:14:02 a.m. 04:14:02 am
-           | 2000 A.D. 2000 a.d. 2000 ad 02:14:03 A.M. 02:14:03 a.m. 02:14:03 am
-           | 2000 A.D. 2000 a.d. 2000 ad 03:14:04 A.M. 03:14:04 a.m. 03:14:04 am
-           | 2000 A.D. 2000 a.d. 2000 ad 01:14:05 A.M. 01:14:05 a.m. 01:14:05 am
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
+           | 2000 A.D. 2000 a.d. 2000 ad 11:14:01 A.M. 11:14:01 a.m. 11:14:01 am
+           | 2000 A.D. 2000 a.d. 2000 ad 07:14:02 A.M. 07:14:02 a.m. 07:14:02 am
+           | 2000 A.D. 2000 a.d. 2000 ad 05:14:03 A.M. 05:14:03 a.m. 05:14:03 am
+           | 2000 A.D. 2000 a.d. 2000 ad 06:14:04 A.M. 06:14:04 a.m. 06:14:04 am
+           | 2000 A.D. 2000 a.d. 2000 ad 04:14:05 A.M. 04:14:05 a.m. 04:14:05 am
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:00 P.M. 05:32:00 p.m. 05:32:00 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 12:32:01 P.M. 12:32:01 p.m. 12:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 09:32:01 A.M. 09:32:01 a.m. 09:32:01 am
-           | 1997 A.D. 1997 a.d. 1997 ad 02:32:01 P.M. 02:32:01 p.m. 02:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 02:32:01 P.M. 02:32:01 p.m. 02:32:01 pm
-           | 1997 A.D. 1997 a.d. 1997 ad 06:32:01 P.M. 06:32:01 p.m. 06:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 04:32:01 P.M. 04:32:01 p.m. 04:32:01 pm
+           | 1997 A.D. 1997 a.d. 1997 ad 08:32:01 P.M. 08:32:01 p.m. 08:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
            | 1997 A.D. 1997 a.d. 1997 ad 05:32:01 P.M. 05:32:01 p.m. 05:32:01 pm
@@ -1592,7 +1592,7 @@
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
-            | 1997 997 97 7 07 043 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
             | 1997 997 97 7 07 043 1
@@ -1664,7 +1664,7 @@
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
-            | 1997 997 97 7 7 43 1
+            | 1997 997 97 7 40 277 4
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
             | 1997 997 97 7 7 43 1
@@ -1779,14 +1779,14 @@
 INSERT INTO TIMESTAMPTZ_TST VALUES(4, '1000000312 23:58:48 IST');
 --Verify data
 SELECT * FROM TIMESTAMPTZ_TST ORDER BY a;
- a |               b                
----+--------------------------------
- 1 | Wed Mar 12 13:58:48 1000 PST
- 2 | Sun Mar 12 14:58:48 10000 PDT
- 3 | Sun Mar 12 14:58:48 100000 PDT
- 3 | Sun Mar 12 14:58:48 10000 PDT
- 4 | Sun Mar 12 14:58:48 10000 PDT
- 4 | Sun Mar 12 14:58:48 100000 PDT
+ a |              b               
+---+------------------------------
+ 1 | 1000-03-12 16:39:28-05:19:20
+ 2 | 10000-03-12 16:58:48-05
+ 3 | 100000-03-12 16:58:48-05
+ 3 | 10000-03-12 16:58:48-05
+ 4 | 10000-03-12 16:58:48-05
+ 4 | 100000-03-12 16:58:48-05
 (6 rows)
 
 --Cleanup
@@ -1795,21 +1795,21 @@
 set TimeZone to 'America/New_York';
 -- numeric timezone
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33);
-        make_timestamptz         
----------------------------------
- Sun Jul 15 08:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 08:15:55.33-04
 (1 row)
 
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33, '+2');
-        make_timestamptz         
----------------------------------
- Sun Jul 15 02:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 02:15:55.33-04
 (1 row)
 
 SELECT make_timestamptz(1973, 07, 15, 08, 15, 55.33, '-2');
-        make_timestamptz         
----------------------------------
- Sun Jul 15 06:15:55.33 1973 EDT
+     make_timestamptz      
+---------------------------
+ 1973-07-15 06:15:55.33-04
 (1 row)
 
 WITH tzs (tz) AS (VALUES
@@ -1818,23 +1818,23 @@
     ('+10:00:1'), ('+10:00:01'),
     ('+10:00:10'))
      SELECT make_timestamptz(2010, 2, 27, 3, 45, 00, tz), tz FROM tzs;
-       make_timestamptz       |    tz     
-------------------------------+-----------
- Fri Feb 26 21:45:00 2010 EST | +1
- Fri Feb 26 21:45:00 2010 EST | +1:
- Fri Feb 26 21:45:00 2010 EST | +1:0
- Fri Feb 26 21:45:00 2010 EST | +100
- Fri Feb 26 21:45:00 2010 EST | +1:00
- Fri Feb 26 21:45:00 2010 EST | +01:00
- Fri Feb 26 12:45:00 2010 EST | +10
- Fri Feb 26 12:45:00 2010 EST | +1000
- Fri Feb 26 12:45:00 2010 EST | +10:
- Fri Feb 26 12:45:00 2010 EST | +10:0
- Fri Feb 26 12:45:00 2010 EST | +10:00
- Fri Feb 26 12:45:00 2010 EST | +10:00:
- Fri Feb 26 12:44:59 2010 EST | +10:00:1
- Fri Feb 26 12:44:59 2010 EST | +10:00:01
- Fri Feb 26 12:44:50 2010 EST | +10:00:10
+    make_timestamptz    |    tz     
+------------------------+-----------
+ 2010-02-26 21:45:00-05 | +1
+ 2010-02-26 21:45:00-05 | +1:
+ 2010-02-26 21:45:00-05 | +1:0
+ 2010-02-26 21:45:00-05 | +100
+ 2010-02-26 21:45:00-05 | +1:00
+ 2010-02-26 21:45:00-05 | +01:00
+ 2010-02-26 12:45:00-05 | +10
+ 2010-02-26 12:45:00-05 | +1000
+ 2010-02-26 12:45:00-05 | +10:
+ 2010-02-26 12:45:00-05 | +10:0
+ 2010-02-26 12:45:00-05 | +10:00
+ 2010-02-26 12:45:00-05 | +10:00:
+ 2010-02-26 12:44:59-05 | +10:00:1
+ 2010-02-26 12:44:59-05 | +10:00:01
+ 2010-02-26 12:44:50-05 | +10:00:10
 (15 rows)
 
 -- these should fail
@@ -1860,42 +1860,42 @@
 (1 row)
 
 SELECT make_timestamptz(2014, 12, 10, 0, 0, 0, 'Europe/Prague') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Tue Dec 09 23:00:00 2014
+      timezone       
+---------------------
+ 2014-12-09 23:00:00
 (1 row)
 
 SELECT make_timestamptz(1846, 12, 10, 0, 0, 0, 'Asia/Manila') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Wed Dec 09 15:56:00 1846
+      timezone       
+---------------------
+ 1846-12-09 15:56:00
 (1 row)
 
 SELECT make_timestamptz(1881, 12, 10, 0, 0, 0, 'Europe/Paris') AT TIME ZONE 'UTC';
-         timezone         
---------------------------
- Fri Dec 09 23:50:39 1881
+      timezone       
+---------------------
+ 1881-12-09 23:50:39
 (1 row)
 
 SELECT make_timestamptz(1910, 12, 24, 0, 0, 0, 'Nehwon/Lankhmar');
 ERROR:  time zone "Nehwon/Lankhmar" not recognized
 -- abbreviations
 SELECT make_timestamptz(2008, 12, 10, 10, 10, 10, 'EST');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 10:10:10 2008 EST
+    make_timestamptz    
+------------------------
+ 2008-12-10 10:10:10-05
 (1 row)
 
 SELECT make_timestamptz(2008, 12, 10, 10, 10, 10, 'EDT');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 09:10:10 2008 EST
+    make_timestamptz    
+------------------------
+ 2008-12-10 09:10:10-05
 (1 row)
 
 SELECT make_timestamptz(2014, 12, 10, 10, 10, 10, 'PST8PDT');
-       make_timestamptz       
-------------------------------
- Wed Dec 10 13:10:10 2014 EST
+    make_timestamptz    
+------------------------
+ 2014-12-10 13:10:10-05
 (1 row)
 
 RESET TimeZone;
@@ -1906,376 +1906,376 @@
 --
 SET TimeZone to 'UTC';
 SELECT '2011-03-27 00:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00 Europe/Moscow'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00 MSK'::timestamptz;
-         timestamptz          
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+      timestamptz       
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2011-03-27 00:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 21:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 21:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 01:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 02:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:00+00
 (1 row)
 
 SELECT '2011-03-27 02:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:00:01+00
 (1 row)
 
 SELECT '2011-03-27 02:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 22:59:59 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 22:59:59+00
 (1 row)
 
 SELECT '2011-03-27 03:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:00+00
 (1 row)
 
 SELECT '2011-03-27 03:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Mar 26 23:00:01 2011 UTC
+        timezone        
+------------------------
+ 2011-03-26 23:00:01+00
 (1 row)
 
 SELECT '2011-03-27 04:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sun Mar 27 00:00:00 2011 UTC
+        timezone        
+------------------------
+ 2011-03-27 00:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00'::timestamp AT TIME ZONE 'Europe/Moscow';
-           timezone           
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT '2014-10-26 00:59:59'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 20:59:59 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 20:59:59+00
 (1 row)
 
 SELECT '2014-10-26 01:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT '2014-10-26 01:00:01'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 22:00:01 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 22:00:01+00
 (1 row)
 
 SELECT '2014-10-26 02:00:00'::timestamp AT TIME ZONE 'MSK';
-           timezone           
-------------------------------
- Sat Oct 25 23:00:00 2014 UTC
+        timezone        
+------------------------
+ 2014-10-25 23:00:00+00
 (1 row)
 
 SELECT make_timestamptz(2014, 10, 26, 0, 0, 0, 'MSK');
-       make_timestamptz       
-------------------------------
- Sat Oct 25 20:00:00 2014 UTC
+    make_timestamptz    
+------------------------
+ 2014-10-25 20:00:00+00
 (1 row)
 
 SELECT make_timestamptz(2014, 10, 26, 1, 0, 0, 'MSK');
-       make_timestamptz       
-------------------------------
- Sat Oct 25 22:00:00 2014 UTC
+    make_timestamptz    
+------------------------
+ 2014-10-25 22:00:00+00
 (1 row)
 
 SELECT to_timestamp(         0);          -- 1970-01-01 00:00:00+00
-         to_timestamp         
-------------------------------
- Thu Jan 01 00:00:00 1970 UTC
+      to_timestamp      
+------------------------
+ 1970-01-01 00:00:00+00
 (1 row)
 
 SELECT to_timestamp( 946684800);          -- 2000-01-01 00:00:00+00
-         to_timestamp         
-------------------------------
- Sat Jan 01 00:00:00 2000 UTC
+      to_timestamp      
+------------------------
+ 2000-01-01 00:00:00+00
 (1 row)
 
 SELECT to_timestamp(1262349296.7890123);  -- 2010-01-01 12:34:56.789012+00
-            to_timestamp             
--------------------------------------
- Fri Jan 01 12:34:56.789012 2010 UTC
+         to_timestamp          
+-------------------------------
+ 2010-01-01 12:34:56.789012+00
 (1 row)
 
 -- edge cases
 SELECT to_timestamp(-210866803200);       --   4714-11-24 00:00:00+00 BC
-          to_timestamp           
----------------------------------
- Mon Nov 24 00:00:00 4714 UTC BC
+       to_timestamp        
+---------------------------
+ 4714-11-24 00:00:00+00 BC
 (1 row)
 
 -- upper limit varies between integer and float timestamps, so hard to test
@@ -2296,220 +2296,220 @@
 ERROR:  timestamp cannot be NaN
 SET TimeZone to 'Europe/Moscow';
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 00:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 00:00:00+03
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 01:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 01:00:00+03
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 01:59:59 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 01:59:59+03
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:00:00+04
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:00:01 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:00:01+04
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 03:59:59 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 03:59:59+04
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Mar 27 04:00:00 2011 MSK
+      timestamptz       
+------------------------
+ 2011-03-27 04:00:00+04
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:00+04
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:59:59 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:59:59+04
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:00+03
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 01:00:01 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 01:00:01+03
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Oct 26 02:00:00 2014 MSK
+      timestamptz       
+------------------------
+ 2014-10-26 02:00:00+03
 (1 row)
 
 RESET TimeZone;
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 00:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 00:00:00
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 01:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 01:00:00
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 01:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 01:59:59
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:00
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:00:01 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:01
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 03:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 03:59:59
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Mar 27 04:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 04:00:00
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:59:59 2014
+      timezone       
+---------------------
+ 2014-10-26 01:59:59
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 01:00:01 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:01
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz AT TIME ZONE 'Europe/Moscow';
-         timezone         
---------------------------
- Sun Oct 26 02:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 02:00:00
 (1 row)
 
 SELECT '2011-03-26 21:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 00:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 00:00:00
 (1 row)
 
 SELECT '2011-03-26 22:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 01:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 01:00:00
 (1 row)
 
 SELECT '2011-03-26 22:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 01:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 01:59:59
 (1 row)
 
 SELECT '2011-03-26 23:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:00
 (1 row)
 
 SELECT '2011-03-26 23:00:01 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:00:01 2011
+      timezone       
+---------------------
+ 2011-03-27 03:00:01
 (1 row)
 
 SELECT '2011-03-26 23:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 03:59:59 2011
+      timezone       
+---------------------
+ 2011-03-27 03:59:59
 (1 row)
 
 SELECT '2011-03-27 00:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Mar 27 04:00:00 2011
+      timezone       
+---------------------
+ 2011-03-27 04:00:00
 (1 row)
 
 SELECT '2014-10-25 21:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 21:59:59 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:59:59 2014
+      timezone       
+---------------------
+ 2014-10-26 01:59:59
 (1 row)
 
 SELECT '2014-10-25 22:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:00
 (1 row)
 
 SELECT '2014-10-25 22:00:01 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 01:00:01 2014
+      timezone       
+---------------------
+ 2014-10-26 01:00:01
 (1 row)
 
 SELECT '2014-10-25 23:00:00 UTC'::timestamptz AT TIME ZONE 'MSK';
-         timezone         
---------------------------
- Sun Oct 26 02:00:00 2014
+      timezone       
+---------------------
+ 2014-10-26 02:00:00
 (1 row)
 
 --
@@ -2519,15 +2519,15 @@
 insert into tmptz values ('2017-01-18 00:00+00');
 explain (costs off)
 select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
-                                           QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
+                                         QUERY PLAN                                         
+--------------------------------------------------------------------------------------------
  Seq Scan on tmptz
-   Filter: (timezone('utc'::text, f1) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
+   Filter: (timezone('utc'::text, f1) = '2017-01-18 00:00:00'::timestamp without time zone)
 (2 rows)
 
 select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
-              f1              
-------------------------------
- Tue Jan 17 16:00:00 2017 PST
+           f1           
+------------------------
+ 2017-01-17 19:00:00-05
 (1 row)
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/horology.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/horology.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/horology.out	2019-08-12 14:55:05.430230622 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/horology.out	2019-09-05 16:22:59.767778230 -0500
@@ -8,73 +8,73 @@
 SELECT timestamp with time zone '20011227 040506+08';
          timestamptz          
 ------------------------------
- Wed Dec 26 12:05:06 2001 PST
+ Wed Dec 26 15:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506-08';
          timestamptz          
 ------------------------------
- Thu Dec 27 04:05:06 2001 PST
+ Thu Dec 27 07:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506.789+08';
            timestamptz            
 ----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+ Wed Dec 26 15:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227 040506.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506+08';
          timestamptz          
 ------------------------------
- Wed Dec 26 12:05:06 2001 PST
+ Wed Dec 26 15:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506-08';
          timestamptz          
 ------------------------------
- Thu Dec 27 04:05:06 2001 PST
+ Thu Dec 27 07:05:06 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506.789+08';
            timestamptz            
 ----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+ Wed Dec 26 15:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '20011227T040506.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001-12-27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001.12.27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '2001/12/27 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 SELECT timestamp with time zone '12/27/2001 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+ Thu Dec 27 07:05:06.789 2001 -05
 (1 row)
 
 -- should fail in mdy mode:
@@ -87,118 +87,116 @@
 SELECT timestamp with time zone '27/12/2001 04:05:06.789-08';
            timestamptz            
 ----------------------------------
- Thu 27 Dec 04:05:06.789 2001 PST
+ Thu 27 Dec 07:05:06.789 2001 -05
 (1 row)
 
 reset datestyle;
 SELECT timestamp with time zone 'Y2001M12D27H04M05S06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04M05S06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04MM05S06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'Y2001M12D27H04MM05S06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 08:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 11:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 00:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 03:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271.5+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 20:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 23:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271.5-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 12:00:00 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 15:00:00-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271 04:05:06+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 12:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 15:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271 04:05:06-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 04:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 07:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506+08';
-         timestamptz          
-------------------------------
- Wed Dec 26 12:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-26 15:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506-08';
-         timestamptz          
-------------------------------
- Thu Dec 27 04:05:06 2001 PST
+      timestamptz       
+------------------------
+ 2001-12-27 07:05:06-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-26 15:05:06.789-05
 (1 row)
 
 SELECT timestamp with time zone 'J2452271T040506.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
+        timestamptz         
+----------------------------
+ 2001-12-27 07:05:06.789-05
 (1 row)
 
 -- German/European-style dates with periods as delimiters
 SELECT timestamp with time zone '12.27.2001 04:05:06.789+08';
-           timestamptz            
-----------------------------------
- Wed Dec 26 12:05:06.789 2001 PST
-(1 row)
-
+ERROR:  date/time field value out of range: "12.27.2001 04:05:06.789+08"
+LINE 1: SELECT timestamp with time zone '12.27.2001 04:05:06.789+08'...
+                                        ^
+HINT:  Perhaps you need a different "datestyle" setting.
 SELECT timestamp with time zone '12.27.2001 04:05:06.789-08';
-           timestamptz            
-----------------------------------
- Thu Dec 27 04:05:06.789 2001 PST
-(1 row)
-
+ERROR:  date/time field value out of range: "12.27.2001 04:05:06.789-08"
+LINE 1: SELECT timestamp with time zone '12.27.2001 04:05:06.789-08'...
+                                        ^
+HINT:  Perhaps you need a different "datestyle" setting.
 SET DateStyle = 'German';
 SELECT timestamp with time zone '27.12.2001 04:05:06.789+08';
          timestamptz         
 -----------------------------
- 26.12.2001 12:05:06.789 PST
+ 26.12.2001 15:05:06.789 -05
 (1 row)
 
 SELECT timestamp with time zone '27.12.2001 04:05:06.789-08';
          timestamptz         
 -----------------------------
- 27.12.2001 04:05:06.789 PST
+ 27.12.2001 07:05:06.789 -05
 (1 row)
 
 SET DateStyle = 'ISO';
@@ -289,13 +287,13 @@
 SELECT date '1991-02-03' + time with time zone '04:05:06 PST' AS "Date + Time PST";
        Date + Time PST        
 ------------------------------
- Sun Feb 03 04:05:06 1991 PST
+ Sun Feb 03 07:05:06 1991 -05
 (1 row)
 
 SELECT date '2001-02-03' + time with time zone '04:05:06 UTC' AS "Date + Time UTC";
        Date + Time UTC        
 ------------------------------
- Fri Feb 02 20:05:06 2001 PST
+ Fri Feb 02 23:05:06 2001 -05
 (1 row)
 
 SELECT date '1991-02-03' + interval '2 years' AS "Add Two Years";
@@ -368,9 +366,9 @@
 (1 row)
 
 SELECT timestamp without time zone '12/31/294276' - timestamp without time zone '12/23/1999' AS "106751991 Days";
-  106751991 Days  
-------------------
- @ 106751991 days
+ 106751991 Days 
+----------------
+ 106751991 days
 (1 row)
 
 -- Shorthand values
@@ -454,13 +452,13 @@
 SELECT date '1994-01-01' + timetz '11:00-5' AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-5') AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT '' AS "64", d1 + interval '1 year' AS one_year FROM TIMESTAMP_TBL;
@@ -494,7 +492,7 @@
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
-    | Tue Feb 10 17:32:01 1998
+    | Fri Oct 02 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
     | Tue Feb 10 17:32:01 1998
@@ -564,7 +562,7 @@
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
-    | Sat Feb 10 17:32:01 1996
+    | Wed Oct 02 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
     | Sat Feb 10 17:32:01 1996
@@ -606,25 +604,25 @@
 SELECT timestamp with time zone '1996-03-01' - interval '1 second' AS "Feb 29";
             Feb 29            
 ------------------------------
- Thu Feb 29 23:59:59 1996 PST
+ Thu Feb 29 23:59:59 1996 -05
 (1 row)
 
 SELECT timestamp with time zone '1999-03-01' - interval '1 second' AS "Feb 28";
             Feb 28            
 ------------------------------
- Sun Feb 28 23:59:59 1999 PST
+ Sun Feb 28 23:59:59 1999 -05
 (1 row)
 
 SELECT timestamp with time zone '2000-03-01' - interval '1 second' AS "Feb 29";
             Feb 29            
 ------------------------------
- Tue Feb 29 23:59:59 2000 PST
+ Tue Feb 29 23:59:59 2000 -05
 (1 row)
 
 SELECT timestamp with time zone '1999-12-01' + interval '1 month - 1 second' AS "Dec 31";
             Dec 31            
 ------------------------------
- Fri Dec 31 23:59:59 1999 PST
+ Fri Dec 31 23:59:59 1999 -05
 (1 row)
 
 SELECT (timestamp with time zone 'today' = (timestamp with time zone 'yesterday' + interval '1 day')) as "True";
@@ -681,31 +679,31 @@
 SELECT timestamptz(date '1994-01-01', time '11:00') AS "Jan_01_1994_10am";
        Jan_01_1994_10am       
 ------------------------------
- Sat Jan 01 11:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time '10:00') AS "Jan_01_1994_9am";
        Jan_01_1994_9am        
 ------------------------------
- Sat Jan 01 10:00:00 1994 PST
+ Sat Jan 01 10:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-8') AS "Jan_01_1994_11am";
        Jan_01_1994_11am       
 ------------------------------
- Sat Jan 01 11:00:00 1994 PST
+ Sat Jan 01 14:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '10:00-8') AS "Jan_01_1994_10am";
        Jan_01_1994_10am       
 ------------------------------
- Sat Jan 01 10:00:00 1994 PST
+ Sat Jan 01 13:00:00 1994 -05
 (1 row)
 
 SELECT timestamptz(date '1994-01-01', time with time zone '11:00-5') AS "Jan_01_1994_8am";
        Jan_01_1994_8am        
 ------------------------------
- Sat Jan 01 08:00:00 1994 PST
+ Sat Jan 01 11:00:00 1994 -05
 (1 row)
 
 SELECT '' AS "64", d1 + interval '1 year' AS one_year FROM TIMESTAMPTZ_TBL;
@@ -713,70 +711,70 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Thu Dec 31 16:00:00 1970 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:02 1998 PST
-    | Tue Feb 10 17:32:01.4 1998 PST
-    | Tue Feb 10 17:32:01.5 1998 PST
-    | Tue Feb 10 17:32:01.6 1998 PST
-    | Fri Jan 02 00:00:00 1998 PST
-    | Fri Jan 02 03:04:05 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Wed Jun 10 17:32:01 1998 PDT
-    | Sun Sep 22 18:19:20 2002 PDT
-    | Thu Mar 15 08:14:01 2001 PST
-    | Thu Mar 15 04:14:02 2001 PST
-    | Thu Mar 15 02:14:03 2001 PST
-    | Thu Mar 15 03:14:04 2001 PST
-    | Thu Mar 15 01:14:05 2001 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:00 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 17:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 09:32:01 1998 PST
-    | Tue Feb 10 14:32:01 1998 PST
-    | Fri Jul 10 14:32:01 1998 PDT
-    | Wed Jun 10 18:32:01 1998 PDT
-    | Tue Feb 10 17:32:01 1998 PST
-    | Wed Feb 11 17:32:01 1998 PST
-    | Thu Feb 12 17:32:01 1998 PST
-    | Fri Feb 13 17:32:01 1998 PST
-    | Sat Feb 14 17:32:01 1998 PST
-    | Sun Feb 15 17:32:01 1998 PST
-    | Mon Feb 16 17:32:01 1998 PST
-    | Thu Feb 16 17:32:01 0096 PST BC
-    | Sun Feb 16 17:32:01 0098 PST
-    | Fri Feb 16 17:32:01 0598 PST
-    | Wed Feb 16 17:32:01 1098 PST
-    | Sun Feb 16 17:32:01 1698 PST
-    | Fri Feb 16 17:32:01 1798 PST
-    | Wed Feb 16 17:32:01 1898 PST
-    | Mon Feb 16 17:32:01 1998 PST
-    | Sun Feb 16 17:32:01 2098 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Fri Feb 28 17:32:01 1997 PST
-    | Sat Mar 01 17:32:01 1997 PST
-    | Tue Dec 30 17:32:01 1997 PST
-    | Wed Dec 31 17:32:01 1997 PST
-    | Thu Jan 01 17:32:01 1998 PST
-    | Sat Feb 28 17:32:01 1998 PST
-    | Sun Mar 01 17:32:01 1998 PST
-    | Wed Dec 30 17:32:01 1998 PST
-    | Thu Dec 31 17:32:01 1998 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
-    | Mon Dec 31 17:32:01 2001 PST
-    | Tue Jan 01 17:32:01 2002 PST
+    | Thu Dec 31 19:00:00 1970 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:02 1998 -05
+    | Tue Feb 10 20:32:01.4 1998 -05
+    | Tue Feb 10 20:32:01.5 1998 -05
+    | Tue Feb 10 20:32:01.6 1998 -05
+    | Fri Jan 02 00:00:00 1998 -05
+    | Fri Jan 02 03:04:05 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Wed Jun 10 19:32:01 1998 -05
+    | Sun Sep 22 18:19:20 2002 -05
+    | Thu Mar 15 11:14:01 2001 -05
+    | Thu Mar 15 07:14:02 2001 -05
+    | Thu Mar 15 05:14:03 2001 -05
+    | Thu Mar 15 06:14:04 2001 -05
+    | Thu Mar 15 04:14:05 2001 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Tue Feb 10 17:32:00 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Fri Oct 02 20:32:01 1998 -05
+    | Tue Feb 10 20:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 12:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Fri Jul 10 16:32:01 1998 -05
+    | Wed Jun 10 20:32:01 1998 -05
+    | Tue Feb 10 17:32:01 1998 -05
+    | Wed Feb 11 17:32:01 1998 -05
+    | Thu Feb 12 17:32:01 1998 -05
+    | Fri Feb 13 17:32:01 1998 -05
+    | Sat Feb 14 17:32:01 1998 -05
+    | Sun Feb 15 17:32:01 1998 -05
+    | Mon Feb 16 17:32:01 1998 -05
+    | Thu Feb 16 17:32:01 0096 LMT BC
+    | Sun Feb 16 17:32:01 0098 LMT
+    | Fri Feb 16 17:32:01 0598 LMT
+    | Wed Feb 16 17:32:01 1098 LMT
+    | Sun Feb 16 17:32:01 1698 LMT
+    | Fri Feb 16 17:32:01 1798 LMT
+    | Wed Feb 16 17:32:01 1898 QMT
+    | Mon Feb 16 17:32:01 1998 -05
+    | Sun Feb 16 17:32:01 2098 -05
+    | Fri Feb 28 17:32:01 1997 -05
+    | Fri Feb 28 17:32:01 1997 -05
+    | Sat Mar 01 17:32:01 1997 -05
+    | Tue Dec 30 17:32:01 1997 -05
+    | Wed Dec 31 17:32:01 1997 -05
+    | Thu Jan 01 17:32:01 1998 -05
+    | Sat Feb 28 17:32:01 1998 -05
+    | Sun Mar 01 17:32:01 1998 -05
+    | Wed Dec 30 17:32:01 1998 -05
+    | Thu Dec 31 17:32:01 1998 -05
+    | Sun Dec 31 17:32:01 2000 -05
+    | Mon Jan 01 17:32:01 2001 -05
+    | Mon Dec 31 17:32:01 2001 -05
+    | Tue Jan 01 17:32:01 2002 -05
 (66 rows)
 
 SELECT '' AS "64", d1 - interval '1 year' AS one_year FROM TIMESTAMPTZ_TBL;
@@ -784,79 +782,79 @@
 ----+---------------------------------
     | -infinity
     | infinity
-    | Tue Dec 31 16:00:00 1968 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:02 1996 PST
-    | Sat Feb 10 17:32:01.4 1996 PST
-    | Sat Feb 10 17:32:01.5 1996 PST
-    | Sat Feb 10 17:32:01.6 1996 PST
-    | Tue Jan 02 00:00:00 1996 PST
-    | Tue Jan 02 03:04:05 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Mon Jun 10 17:32:01 1996 PDT
-    | Fri Sep 22 18:19:20 2000 PDT
-    | Mon Mar 15 08:14:01 1999 PST
-    | Mon Mar 15 04:14:02 1999 PST
-    | Mon Mar 15 02:14:03 1999 PST
-    | Mon Mar 15 03:14:04 1999 PST
-    | Mon Mar 15 01:14:05 1999 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:00 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 09:32:01 1996 PST
-    | Sat Feb 10 14:32:01 1996 PST
-    | Wed Jul 10 14:32:01 1996 PDT
-    | Mon Jun 10 18:32:01 1996 PDT
-    | Sat Feb 10 17:32:01 1996 PST
-    | Sun Feb 11 17:32:01 1996 PST
-    | Mon Feb 12 17:32:01 1996 PST
-    | Tue Feb 13 17:32:01 1996 PST
-    | Wed Feb 14 17:32:01 1996 PST
-    | Thu Feb 15 17:32:01 1996 PST
-    | Fri Feb 16 17:32:01 1996 PST
-    | Mon Feb 16 17:32:01 0098 PST BC
-    | Thu Feb 16 17:32:01 0096 PST
-    | Tue Feb 16 17:32:01 0596 PST
-    | Sun Feb 16 17:32:01 1096 PST
-    | Thu Feb 16 17:32:01 1696 PST
-    | Tue Feb 16 17:32:01 1796 PST
-    | Sun Feb 16 17:32:01 1896 PST
-    | Fri Feb 16 17:32:01 1996 PST
-    | Thu Feb 16 17:32:01 2096 PST
-    | Tue Feb 28 17:32:01 1995 PST
-    | Tue Feb 28 17:32:01 1995 PST
-    | Wed Mar 01 17:32:01 1995 PST
-    | Sat Dec 30 17:32:01 1995 PST
-    | Sun Dec 31 17:32:01 1995 PST
-    | Mon Jan 01 17:32:01 1996 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Thu Dec 31 17:32:01 1998 PST
-    | Fri Jan 01 17:32:01 1999 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
+    | Tue Dec 31 19:00:00 1968 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:02 1996 -05
+    | Sat Feb 10 20:32:01.4 1996 -05
+    | Sat Feb 10 20:32:01.5 1996 -05
+    | Sat Feb 10 20:32:01.6 1996 -05
+    | Tue Jan 02 00:00:00 1996 -05
+    | Tue Jan 02 03:04:05 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Mon Jun 10 19:32:01 1996 -05
+    | Fri Sep 22 18:19:20 2000 -05
+    | Mon Mar 15 11:14:01 1999 -05
+    | Mon Mar 15 07:14:02 1999 -05
+    | Mon Mar 15 05:14:03 1999 -05
+    | Mon Mar 15 06:14:04 1999 -05
+    | Mon Mar 15 04:14:05 1999 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Sat Feb 10 17:32:00 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Wed Oct 02 20:32:01 1996 -05
+    | Sat Feb 10 20:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 12:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Wed Jul 10 16:32:01 1996 -05
+    | Mon Jun 10 20:32:01 1996 -05
+    | Sat Feb 10 17:32:01 1996 -05
+    | Sun Feb 11 17:32:01 1996 -05
+    | Mon Feb 12 17:32:01 1996 -05
+    | Tue Feb 13 17:32:01 1996 -05
+    | Wed Feb 14 17:32:01 1996 -05
+    | Thu Feb 15 17:32:01 1996 -05
+    | Fri Feb 16 17:32:01 1996 -05
+    | Mon Feb 16 17:32:01 0098 LMT BC
+    | Thu Feb 16 17:32:01 0096 LMT
+    | Tue Feb 16 17:32:01 0596 LMT
+    | Sun Feb 16 17:32:01 1096 LMT
+    | Thu Feb 16 17:32:01 1696 LMT
+    | Tue Feb 16 17:32:01 1796 LMT
+    | Sun Feb 16 17:32:01 1896 QMT
+    | Fri Feb 16 17:32:01 1996 -05
+    | Thu Feb 16 17:32:01 2096 -05
+    | Tue Feb 28 17:32:01 1995 -05
+    | Tue Feb 28 17:32:01 1995 -05
+    | Wed Mar 01 17:32:01 1995 -05
+    | Sat Dec 30 17:32:01 1995 -05
+    | Sun Dec 31 17:32:01 1995 -05
+    | Mon Jan 01 17:32:01 1996 -05
+    | Wed Feb 28 17:32:01 1996 -05
+    | Fri Mar 01 17:32:01 1996 -05
+    | Mon Dec 30 17:32:01 1996 -05
+    | Tue Dec 31 17:32:01 1996 -05
+    | Thu Dec 31 17:32:01 1998 -05
+    | Fri Jan 01 17:32:01 1999 -05
+    | Fri Dec 31 17:32:01 1999 -05
+    | Sat Jan 01 17:32:01 2000 -05
 (66 rows)
 
 --
 -- time, interval arithmetic
 --
 SELECT CAST(time '01:02' AS interval) AS "+01:02";
-     +01:02      
------------------
- @ 1 hour 2 mins
+  +01:02  
+----------
+ 01:02:00
 (1 row)
 
 SELECT CAST(interval '02:03' AS time) AS "02:03:00";
@@ -933,346 +931,346 @@
   WHERE t.d1 BETWEEN '1990-01-01' AND '2001-01-01'
     AND i.f1 BETWEEN '00:00' AND '23:00'
   ORDER BY 1,2;
-             t              |     i     |            add             |          subtract          
-----------------------------+-----------+----------------------------+----------------------------
- Wed Feb 28 17:32:01 1996   | @ 1 min   | Wed Feb 28 17:33:01 1996   | Wed Feb 28 17:31:01 1996
- Wed Feb 28 17:32:01 1996   | @ 5 hours | Wed Feb 28 22:32:01 1996   | Wed Feb 28 12:32:01 1996
- Thu Feb 29 17:32:01 1996   | @ 1 min   | Thu Feb 29 17:33:01 1996   | Thu Feb 29 17:31:01 1996
- Thu Feb 29 17:32:01 1996   | @ 5 hours | Thu Feb 29 22:32:01 1996   | Thu Feb 29 12:32:01 1996
- Fri Mar 01 17:32:01 1996   | @ 1 min   | Fri Mar 01 17:33:01 1996   | Fri Mar 01 17:31:01 1996
- Fri Mar 01 17:32:01 1996   | @ 5 hours | Fri Mar 01 22:32:01 1996   | Fri Mar 01 12:32:01 1996
- Mon Dec 30 17:32:01 1996   | @ 1 min   | Mon Dec 30 17:33:01 1996   | Mon Dec 30 17:31:01 1996
- Mon Dec 30 17:32:01 1996   | @ 5 hours | Mon Dec 30 22:32:01 1996   | Mon Dec 30 12:32:01 1996
- Tue Dec 31 17:32:01 1996   | @ 1 min   | Tue Dec 31 17:33:01 1996   | Tue Dec 31 17:31:01 1996
- Tue Dec 31 17:32:01 1996   | @ 5 hours | Tue Dec 31 22:32:01 1996   | Tue Dec 31 12:32:01 1996
- Wed Jan 01 17:32:01 1997   | @ 1 min   | Wed Jan 01 17:33:01 1997   | Wed Jan 01 17:31:01 1997
- Wed Jan 01 17:32:01 1997   | @ 5 hours | Wed Jan 01 22:32:01 1997   | Wed Jan 01 12:32:01 1997
- Thu Jan 02 00:00:00 1997   | @ 1 min   | Thu Jan 02 00:01:00 1997   | Wed Jan 01 23:59:00 1997
- Thu Jan 02 00:00:00 1997   | @ 5 hours | Thu Jan 02 05:00:00 1997   | Wed Jan 01 19:00:00 1997
- Thu Jan 02 03:04:05 1997   | @ 1 min   | Thu Jan 02 03:05:05 1997   | Thu Jan 02 03:03:05 1997
- Thu Jan 02 03:04:05 1997   | @ 5 hours | Thu Jan 02 08:04:05 1997   | Wed Jan 01 22:04:05 1997
- Mon Feb 10 17:32:00 1997   | @ 1 min   | Mon Feb 10 17:33:00 1997   | Mon Feb 10 17:31:00 1997
- Mon Feb 10 17:32:00 1997   | @ 5 hours | Mon Feb 10 22:32:00 1997   | Mon Feb 10 12:32:00 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 1 min   | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01 1997   | @ 5 hours | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
- Mon Feb 10 17:32:01.4 1997 | @ 1 min   | Mon Feb 10 17:33:01.4 1997 | Mon Feb 10 17:31:01.4 1997
- Mon Feb 10 17:32:01.4 1997 | @ 5 hours | Mon Feb 10 22:32:01.4 1997 | Mon Feb 10 12:32:01.4 1997
- Mon Feb 10 17:32:01.5 1997 | @ 1 min   | Mon Feb 10 17:33:01.5 1997 | Mon Feb 10 17:31:01.5 1997
- Mon Feb 10 17:32:01.5 1997 | @ 5 hours | Mon Feb 10 22:32:01.5 1997 | Mon Feb 10 12:32:01.5 1997
- Mon Feb 10 17:32:01.6 1997 | @ 1 min   | Mon Feb 10 17:33:01.6 1997 | Mon Feb 10 17:31:01.6 1997
- Mon Feb 10 17:32:01.6 1997 | @ 5 hours | Mon Feb 10 22:32:01.6 1997 | Mon Feb 10 12:32:01.6 1997
- Mon Feb 10 17:32:02 1997   | @ 1 min   | Mon Feb 10 17:33:02 1997   | Mon Feb 10 17:31:02 1997
- Mon Feb 10 17:32:02 1997   | @ 5 hours | Mon Feb 10 22:32:02 1997   | Mon Feb 10 12:32:02 1997
- Tue Feb 11 17:32:01 1997   | @ 1 min   | Tue Feb 11 17:33:01 1997   | Tue Feb 11 17:31:01 1997
- Tue Feb 11 17:32:01 1997   | @ 5 hours | Tue Feb 11 22:32:01 1997   | Tue Feb 11 12:32:01 1997
- Wed Feb 12 17:32:01 1997   | @ 1 min   | Wed Feb 12 17:33:01 1997   | Wed Feb 12 17:31:01 1997
- Wed Feb 12 17:32:01 1997   | @ 5 hours | Wed Feb 12 22:32:01 1997   | Wed Feb 12 12:32:01 1997
- Thu Feb 13 17:32:01 1997   | @ 1 min   | Thu Feb 13 17:33:01 1997   | Thu Feb 13 17:31:01 1997
- Thu Feb 13 17:32:01 1997   | @ 5 hours | Thu Feb 13 22:32:01 1997   | Thu Feb 13 12:32:01 1997
- Fri Feb 14 17:32:01 1997   | @ 1 min   | Fri Feb 14 17:33:01 1997   | Fri Feb 14 17:31:01 1997
- Fri Feb 14 17:32:01 1997   | @ 5 hours | Fri Feb 14 22:32:01 1997   | Fri Feb 14 12:32:01 1997
- Sat Feb 15 17:32:01 1997   | @ 1 min   | Sat Feb 15 17:33:01 1997   | Sat Feb 15 17:31:01 1997
- Sat Feb 15 17:32:01 1997   | @ 5 hours | Sat Feb 15 22:32:01 1997   | Sat Feb 15 12:32:01 1997
- Sun Feb 16 17:32:01 1997   | @ 1 min   | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
- Sun Feb 16 17:32:01 1997   | @ 1 min   | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
- Sun Feb 16 17:32:01 1997   | @ 5 hours | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
- Sun Feb 16 17:32:01 1997   | @ 5 hours | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
- Fri Feb 28 17:32:01 1997   | @ 1 min   | Fri Feb 28 17:33:01 1997   | Fri Feb 28 17:31:01 1997
- Fri Feb 28 17:32:01 1997   | @ 5 hours | Fri Feb 28 22:32:01 1997   | Fri Feb 28 12:32:01 1997
- Sat Mar 01 17:32:01 1997   | @ 1 min   | Sat Mar 01 17:33:01 1997   | Sat Mar 01 17:31:01 1997
- Sat Mar 01 17:32:01 1997   | @ 5 hours | Sat Mar 01 22:32:01 1997   | Sat Mar 01 12:32:01 1997
- Tue Jun 10 17:32:01 1997   | @ 1 min   | Tue Jun 10 17:33:01 1997   | Tue Jun 10 17:31:01 1997
- Tue Jun 10 17:32:01 1997   | @ 5 hours | Tue Jun 10 22:32:01 1997   | Tue Jun 10 12:32:01 1997
- Tue Jun 10 18:32:01 1997   | @ 1 min   | Tue Jun 10 18:33:01 1997   | Tue Jun 10 18:31:01 1997
- Tue Jun 10 18:32:01 1997   | @ 5 hours | Tue Jun 10 23:32:01 1997   | Tue Jun 10 13:32:01 1997
- Tue Dec 30 17:32:01 1997   | @ 1 min   | Tue Dec 30 17:33:01 1997   | Tue Dec 30 17:31:01 1997
- Tue Dec 30 17:32:01 1997   | @ 5 hours | Tue Dec 30 22:32:01 1997   | Tue Dec 30 12:32:01 1997
- Wed Dec 31 17:32:01 1997   | @ 1 min   | Wed Dec 31 17:33:01 1997   | Wed Dec 31 17:31:01 1997
- Wed Dec 31 17:32:01 1997   | @ 5 hours | Wed Dec 31 22:32:01 1997   | Wed Dec 31 12:32:01 1997
- Fri Dec 31 17:32:01 1999   | @ 1 min   | Fri Dec 31 17:33:01 1999   | Fri Dec 31 17:31:01 1999
- Fri Dec 31 17:32:01 1999   | @ 5 hours | Fri Dec 31 22:32:01 1999   | Fri Dec 31 12:32:01 1999
- Sat Jan 01 17:32:01 2000   | @ 1 min   | Sat Jan 01 17:33:01 2000   | Sat Jan 01 17:31:01 2000
- Sat Jan 01 17:32:01 2000   | @ 5 hours | Sat Jan 01 22:32:01 2000   | Sat Jan 01 12:32:01 2000
- Wed Mar 15 02:14:05 2000   | @ 1 min   | Wed Mar 15 02:15:05 2000   | Wed Mar 15 02:13:05 2000
- Wed Mar 15 02:14:05 2000   | @ 5 hours | Wed Mar 15 07:14:05 2000   | Tue Mar 14 21:14:05 2000
- Wed Mar 15 03:14:04 2000   | @ 1 min   | Wed Mar 15 03:15:04 2000   | Wed Mar 15 03:13:04 2000
- Wed Mar 15 03:14:04 2000   | @ 5 hours | Wed Mar 15 08:14:04 2000   | Tue Mar 14 22:14:04 2000
- Wed Mar 15 08:14:01 2000   | @ 1 min   | Wed Mar 15 08:15:01 2000   | Wed Mar 15 08:13:01 2000
- Wed Mar 15 08:14:01 2000   | @ 5 hours | Wed Mar 15 13:14:01 2000   | Wed Mar 15 03:14:01 2000
- Wed Mar 15 12:14:03 2000   | @ 1 min   | Wed Mar 15 12:15:03 2000   | Wed Mar 15 12:13:03 2000
- Wed Mar 15 12:14:03 2000   | @ 5 hours | Wed Mar 15 17:14:03 2000   | Wed Mar 15 07:14:03 2000
- Wed Mar 15 13:14:02 2000   | @ 1 min   | Wed Mar 15 13:15:02 2000   | Wed Mar 15 13:13:02 2000
- Wed Mar 15 13:14:02 2000   | @ 5 hours | Wed Mar 15 18:14:02 2000   | Wed Mar 15 08:14:02 2000
- Sun Dec 31 17:32:01 2000   | @ 1 min   | Sun Dec 31 17:33:01 2000   | Sun Dec 31 17:31:01 2000
- Sun Dec 31 17:32:01 2000   | @ 5 hours | Sun Dec 31 22:32:01 2000   | Sun Dec 31 12:32:01 2000
+             t              |    i     |            add             |          subtract          
+----------------------------+----------+----------------------------+----------------------------
+ Wed Feb 28 17:32:01 1996   | 00:01:00 | Wed Feb 28 17:33:01 1996   | Wed Feb 28 17:31:01 1996
+ Wed Feb 28 17:32:01 1996   | 05:00:00 | Wed Feb 28 22:32:01 1996   | Wed Feb 28 12:32:01 1996
+ Thu Feb 29 17:32:01 1996   | 00:01:00 | Thu Feb 29 17:33:01 1996   | Thu Feb 29 17:31:01 1996
+ Thu Feb 29 17:32:01 1996   | 05:00:00 | Thu Feb 29 22:32:01 1996   | Thu Feb 29 12:32:01 1996
+ Fri Mar 01 17:32:01 1996   | 00:01:00 | Fri Mar 01 17:33:01 1996   | Fri Mar 01 17:31:01 1996
+ Fri Mar 01 17:32:01 1996   | 05:00:00 | Fri Mar 01 22:32:01 1996   | Fri Mar 01 12:32:01 1996
+ Mon Dec 30 17:32:01 1996   | 00:01:00 | Mon Dec 30 17:33:01 1996   | Mon Dec 30 17:31:01 1996
+ Mon Dec 30 17:32:01 1996   | 05:00:00 | Mon Dec 30 22:32:01 1996   | Mon Dec 30 12:32:01 1996
+ Tue Dec 31 17:32:01 1996   | 00:01:00 | Tue Dec 31 17:33:01 1996   | Tue Dec 31 17:31:01 1996
+ Tue Dec 31 17:32:01 1996   | 05:00:00 | Tue Dec 31 22:32:01 1996   | Tue Dec 31 12:32:01 1996
+ Wed Jan 01 17:32:01 1997   | 00:01:00 | Wed Jan 01 17:33:01 1997   | Wed Jan 01 17:31:01 1997
+ Wed Jan 01 17:32:01 1997   | 05:00:00 | Wed Jan 01 22:32:01 1997   | Wed Jan 01 12:32:01 1997
+ Thu Jan 02 00:00:00 1997   | 00:01:00 | Thu Jan 02 00:01:00 1997   | Wed Jan 01 23:59:00 1997
+ Thu Jan 02 00:00:00 1997   | 05:00:00 | Thu Jan 02 05:00:00 1997   | Wed Jan 01 19:00:00 1997
+ Thu Jan 02 03:04:05 1997   | 00:01:00 | Thu Jan 02 03:05:05 1997   | Thu Jan 02 03:03:05 1997
+ Thu Jan 02 03:04:05 1997   | 05:00:00 | Thu Jan 02 08:04:05 1997   | Wed Jan 01 22:04:05 1997
+ Mon Feb 10 17:32:00 1997   | 00:01:00 | Mon Feb 10 17:33:00 1997   | Mon Feb 10 17:31:00 1997
+ Mon Feb 10 17:32:00 1997   | 05:00:00 | Mon Feb 10 22:32:00 1997   | Mon Feb 10 12:32:00 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 00:01:00 | Mon Feb 10 17:33:01 1997   | Mon Feb 10 17:31:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01 1997   | 05:00:00 | Mon Feb 10 22:32:01 1997   | Mon Feb 10 12:32:01 1997
+ Mon Feb 10 17:32:01.4 1997 | 00:01:00 | Mon Feb 10 17:33:01.4 1997 | Mon Feb 10 17:31:01.4 1997
+ Mon Feb 10 17:32:01.4 1997 | 05:00:00 | Mon Feb 10 22:32:01.4 1997 | Mon Feb 10 12:32:01.4 1997
+ Mon Feb 10 17:32:01.5 1997 | 00:01:00 | Mon Feb 10 17:33:01.5 1997 | Mon Feb 10 17:31:01.5 1997
+ Mon Feb 10 17:32:01.5 1997 | 05:00:00 | Mon Feb 10 22:32:01.5 1997 | Mon Feb 10 12:32:01.5 1997
+ Mon Feb 10 17:32:01.6 1997 | 00:01:00 | Mon Feb 10 17:33:01.6 1997 | Mon Feb 10 17:31:01.6 1997
+ Mon Feb 10 17:32:01.6 1997 | 05:00:00 | Mon Feb 10 22:32:01.6 1997 | Mon Feb 10 12:32:01.6 1997
+ Mon Feb 10 17:32:02 1997   | 00:01:00 | Mon Feb 10 17:33:02 1997   | Mon Feb 10 17:31:02 1997
+ Mon Feb 10 17:32:02 1997   | 05:00:00 | Mon Feb 10 22:32:02 1997   | Mon Feb 10 12:32:02 1997
+ Tue Feb 11 17:32:01 1997   | 00:01:00 | Tue Feb 11 17:33:01 1997   | Tue Feb 11 17:31:01 1997
+ Tue Feb 11 17:32:01 1997   | 05:00:00 | Tue Feb 11 22:32:01 1997   | Tue Feb 11 12:32:01 1997
+ Wed Feb 12 17:32:01 1997   | 00:01:00 | Wed Feb 12 17:33:01 1997   | Wed Feb 12 17:31:01 1997
+ Wed Feb 12 17:32:01 1997   | 05:00:00 | Wed Feb 12 22:32:01 1997   | Wed Feb 12 12:32:01 1997
+ Thu Feb 13 17:32:01 1997   | 00:01:00 | Thu Feb 13 17:33:01 1997   | Thu Feb 13 17:31:01 1997
+ Thu Feb 13 17:32:01 1997   | 05:00:00 | Thu Feb 13 22:32:01 1997   | Thu Feb 13 12:32:01 1997
+ Fri Feb 14 17:32:01 1997   | 00:01:00 | Fri Feb 14 17:33:01 1997   | Fri Feb 14 17:31:01 1997
+ Fri Feb 14 17:32:01 1997   | 05:00:00 | Fri Feb 14 22:32:01 1997   | Fri Feb 14 12:32:01 1997
+ Sat Feb 15 17:32:01 1997   | 00:01:00 | Sat Feb 15 17:33:01 1997   | Sat Feb 15 17:31:01 1997
+ Sat Feb 15 17:32:01 1997   | 05:00:00 | Sat Feb 15 22:32:01 1997   | Sat Feb 15 12:32:01 1997
+ Sun Feb 16 17:32:01 1997   | 00:01:00 | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
+ Sun Feb 16 17:32:01 1997   | 00:01:00 | Sun Feb 16 17:33:01 1997   | Sun Feb 16 17:31:01 1997
+ Sun Feb 16 17:32:01 1997   | 05:00:00 | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
+ Sun Feb 16 17:32:01 1997   | 05:00:00 | Sun Feb 16 22:32:01 1997   | Sun Feb 16 12:32:01 1997
+ Fri Feb 28 17:32:01 1997   | 00:01:00 | Fri Feb 28 17:33:01 1997   | Fri Feb 28 17:31:01 1997
+ Fri Feb 28 17:32:01 1997   | 05:00:00 | Fri Feb 28 22:32:01 1997   | Fri Feb 28 12:32:01 1997
+ Sat Mar 01 17:32:01 1997   | 00:01:00 | Sat Mar 01 17:33:01 1997   | Sat Mar 01 17:31:01 1997
+ Sat Mar 01 17:32:01 1997   | 05:00:00 | Sat Mar 01 22:32:01 1997   | Sat Mar 01 12:32:01 1997
+ Tue Jun 10 17:32:01 1997   | 00:01:00 | Tue Jun 10 17:33:01 1997   | Tue Jun 10 17:31:01 1997
+ Tue Jun 10 17:32:01 1997   | 05:00:00 | Tue Jun 10 22:32:01 1997   | Tue Jun 10 12:32:01 1997
+ Tue Jun 10 18:32:01 1997   | 00:01:00 | Tue Jun 10 18:33:01 1997   | Tue Jun 10 18:31:01 1997
+ Tue Jun 10 18:32:01 1997   | 05:00:00 | Tue Jun 10 23:32:01 1997   | Tue Jun 10 13:32:01 1997
+ Thu Oct 02 17:32:01 1997   | 00:01:00 | Thu Oct 02 17:33:01 1997   | Thu Oct 02 17:31:01 1997
+ Thu Oct 02 17:32:01 1997   | 05:00:00 | Thu Oct 02 22:32:01 1997   | Thu Oct 02 12:32:01 1997
+ Tue Dec 30 17:32:01 1997   | 00:01:00 | Tue Dec 30 17:33:01 1997   | Tue Dec 30 17:31:01 1997
+ Tue Dec 30 17:32:01 1997   | 05:00:00 | Tue Dec 30 22:32:01 1997   | Tue Dec 30 12:32:01 1997
+ Wed Dec 31 17:32:01 1997   | 00:01:00 | Wed Dec 31 17:33:01 1997   | Wed Dec 31 17:31:01 1997
+ Wed Dec 31 17:32:01 1997   | 05:00:00 | Wed Dec 31 22:32:01 1997   | Wed Dec 31 12:32:01 1997
+ Fri Dec 31 17:32:01 1999   | 00:01:00 | Fri Dec 31 17:33:01 1999   | Fri Dec 31 17:31:01 1999
+ Fri Dec 31 17:32:01 1999   | 05:00:00 | Fri Dec 31 22:32:01 1999   | Fri Dec 31 12:32:01 1999
+ Sat Jan 01 17:32:01 2000   | 00:01:00 | Sat Jan 01 17:33:01 2000   | Sat Jan 01 17:31:01 2000
+ Sat Jan 01 17:32:01 2000   | 05:00:00 | Sat Jan 01 22:32:01 2000   | Sat Jan 01 12:32:01 2000
+ Wed Mar 15 02:14:05 2000   | 00:01:00 | Wed Mar 15 02:15:05 2000   | Wed Mar 15 02:13:05 2000
+ Wed Mar 15 02:14:05 2000   | 05:00:00 | Wed Mar 15 07:14:05 2000   | Tue Mar 14 21:14:05 2000
+ Wed Mar 15 03:14:04 2000   | 00:01:00 | Wed Mar 15 03:15:04 2000   | Wed Mar 15 03:13:04 2000
+ Wed Mar 15 03:14:04 2000   | 05:00:00 | Wed Mar 15 08:14:04 2000   | Tue Mar 14 22:14:04 2000
+ Wed Mar 15 08:14:01 2000   | 00:01:00 | Wed Mar 15 08:15:01 2000   | Wed Mar 15 08:13:01 2000
+ Wed Mar 15 08:14:01 2000   | 05:00:00 | Wed Mar 15 13:14:01 2000   | Wed Mar 15 03:14:01 2000
+ Wed Mar 15 12:14:03 2000   | 00:01:00 | Wed Mar 15 12:15:03 2000   | Wed Mar 15 12:13:03 2000
+ Wed Mar 15 12:14:03 2000   | 05:00:00 | Wed Mar 15 17:14:03 2000   | Wed Mar 15 07:14:03 2000
+ Wed Mar 15 13:14:02 2000   | 00:01:00 | Wed Mar 15 13:15:02 2000   | Wed Mar 15 13:13:02 2000
+ Wed Mar 15 13:14:02 2000   | 05:00:00 | Wed Mar 15 18:14:02 2000   | Wed Mar 15 08:14:02 2000
+ Sun Dec 31 17:32:01 2000   | 00:01:00 | Sun Dec 31 17:33:01 2000   | Sun Dec 31 17:31:01 2000
+ Sun Dec 31 17:32:01 2000   | 05:00:00 | Sun Dec 31 22:32:01 2000   | Sun Dec 31 12:32:01 2000
 (104 rows)
 
 SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS "add", t.f1 - i.f1 AS "subtract"
   FROM TIME_TBL t, INTERVAL_TBL i
   ORDER BY 1,2;
-      t      |               i               |     add     |  subtract   
--------------+-------------------------------+-------------+-------------
- 00:00:00    | @ 14 secs ago                 | 23:59:46    | 00:00:14
- 00:00:00    | @ 1 min                       | 00:01:00    | 23:59:00
- 00:00:00    | @ 5 hours                     | 05:00:00    | 19:00:00
- 00:00:00    | @ 1 day 2 hours 3 mins 4 secs | 02:03:04    | 21:56:56
- 00:00:00    | @ 10 days                     | 00:00:00    | 00:00:00
- 00:00:00    | @ 3 mons                      | 00:00:00    | 00:00:00
- 00:00:00    | @ 5 mons                      | 00:00:00    | 00:00:00
- 00:00:00    | @ 5 mons 12 hours             | 12:00:00    | 12:00:00
- 00:00:00    | @ 6 years                     | 00:00:00    | 00:00:00
- 00:00:00    | @ 34 years                    | 00:00:00    | 00:00:00
- 01:00:00    | @ 14 secs ago                 | 00:59:46    | 01:00:14
- 01:00:00    | @ 1 min                       | 01:01:00    | 00:59:00
- 01:00:00    | @ 5 hours                     | 06:00:00    | 20:00:00
- 01:00:00    | @ 1 day 2 hours 3 mins 4 secs | 03:03:04    | 22:56:56
- 01:00:00    | @ 10 days                     | 01:00:00    | 01:00:00
- 01:00:00    | @ 3 mons                      | 01:00:00    | 01:00:00
- 01:00:00    | @ 5 mons                      | 01:00:00    | 01:00:00
- 01:00:00    | @ 5 mons 12 hours             | 13:00:00    | 13:00:00
- 01:00:00    | @ 6 years                     | 01:00:00    | 01:00:00
- 01:00:00    | @ 34 years                    | 01:00:00    | 01:00:00
- 02:03:00    | @ 14 secs ago                 | 02:02:46    | 02:03:14
- 02:03:00    | @ 1 min                       | 02:04:00    | 02:02:00
- 02:03:00    | @ 5 hours                     | 07:03:00    | 21:03:00
- 02:03:00    | @ 1 day 2 hours 3 mins 4 secs | 04:06:04    | 23:59:56
- 02:03:00    | @ 10 days                     | 02:03:00    | 02:03:00
- 02:03:00    | @ 3 mons                      | 02:03:00    | 02:03:00
- 02:03:00    | @ 5 mons                      | 02:03:00    | 02:03:00
- 02:03:00    | @ 5 mons 12 hours             | 14:03:00    | 14:03:00
- 02:03:00    | @ 6 years                     | 02:03:00    | 02:03:00
- 02:03:00    | @ 34 years                    | 02:03:00    | 02:03:00
- 11:59:00    | @ 14 secs ago                 | 11:58:46    | 11:59:14
- 11:59:00    | @ 1 min                       | 12:00:00    | 11:58:00
- 11:59:00    | @ 5 hours                     | 16:59:00    | 06:59:00
- 11:59:00    | @ 1 day 2 hours 3 mins 4 secs | 14:02:04    | 09:55:56
- 11:59:00    | @ 10 days                     | 11:59:00    | 11:59:00
- 11:59:00    | @ 3 mons                      | 11:59:00    | 11:59:00
- 11:59:00    | @ 5 mons                      | 11:59:00    | 11:59:00
- 11:59:00    | @ 5 mons 12 hours             | 23:59:00    | 23:59:00
- 11:59:00    | @ 6 years                     | 11:59:00    | 11:59:00
- 11:59:00    | @ 34 years                    | 11:59:00    | 11:59:00
- 12:00:00    | @ 14 secs ago                 | 11:59:46    | 12:00:14
- 12:00:00    | @ 1 min                       | 12:01:00    | 11:59:00
- 12:00:00    | @ 5 hours                     | 17:00:00    | 07:00:00
- 12:00:00    | @ 1 day 2 hours 3 mins 4 secs | 14:03:04    | 09:56:56
- 12:00:00    | @ 10 days                     | 12:00:00    | 12:00:00
- 12:00:00    | @ 3 mons                      | 12:00:00    | 12:00:00
- 12:00:00    | @ 5 mons                      | 12:00:00    | 12:00:00
- 12:00:00    | @ 5 mons 12 hours             | 00:00:00    | 00:00:00
- 12:00:00    | @ 6 years                     | 12:00:00    | 12:00:00
- 12:00:00    | @ 34 years                    | 12:00:00    | 12:00:00
- 12:01:00    | @ 14 secs ago                 | 12:00:46    | 12:01:14
- 12:01:00    | @ 1 min                       | 12:02:00    | 12:00:00
- 12:01:00    | @ 5 hours                     | 17:01:00    | 07:01:00
- 12:01:00    | @ 1 day 2 hours 3 mins 4 secs | 14:04:04    | 09:57:56
- 12:01:00    | @ 10 days                     | 12:01:00    | 12:01:00
- 12:01:00    | @ 3 mons                      | 12:01:00    | 12:01:00
- 12:01:00    | @ 5 mons                      | 12:01:00    | 12:01:00
- 12:01:00    | @ 5 mons 12 hours             | 00:01:00    | 00:01:00
- 12:01:00    | @ 6 years                     | 12:01:00    | 12:01:00
- 12:01:00    | @ 34 years                    | 12:01:00    | 12:01:00
- 15:36:39    | @ 14 secs ago                 | 15:36:25    | 15:36:53
- 15:36:39    | @ 14 secs ago                 | 15:36:25    | 15:36:53
- 15:36:39    | @ 1 min                       | 15:37:39    | 15:35:39
- 15:36:39    | @ 1 min                       | 15:37:39    | 15:35:39
- 15:36:39    | @ 5 hours                     | 20:36:39    | 10:36:39
- 15:36:39    | @ 5 hours                     | 20:36:39    | 10:36:39
- 15:36:39    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43    | 13:33:35
- 15:36:39    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43    | 13:33:35
- 15:36:39    | @ 10 days                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 10 days                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 3 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 3 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons                      | 15:36:39    | 15:36:39
- 15:36:39    | @ 5 mons 12 hours             | 03:36:39    | 03:36:39
- 15:36:39    | @ 5 mons 12 hours             | 03:36:39    | 03:36:39
- 15:36:39    | @ 6 years                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 6 years                     | 15:36:39    | 15:36:39
- 15:36:39    | @ 34 years                    | 15:36:39    | 15:36:39
- 15:36:39    | @ 34 years                    | 15:36:39    | 15:36:39
- 23:59:00    | @ 14 secs ago                 | 23:58:46    | 23:59:14
- 23:59:00    | @ 1 min                       | 00:00:00    | 23:58:00
- 23:59:00    | @ 5 hours                     | 04:59:00    | 18:59:00
- 23:59:00    | @ 1 day 2 hours 3 mins 4 secs | 02:02:04    | 21:55:56
- 23:59:00    | @ 10 days                     | 23:59:00    | 23:59:00
- 23:59:00    | @ 3 mons                      | 23:59:00    | 23:59:00
- 23:59:00    | @ 5 mons                      | 23:59:00    | 23:59:00
- 23:59:00    | @ 5 mons 12 hours             | 11:59:00    | 11:59:00
- 23:59:00    | @ 6 years                     | 23:59:00    | 23:59:00
- 23:59:00    | @ 34 years                    | 23:59:00    | 23:59:00
- 23:59:59.99 | @ 14 secs ago                 | 23:59:45.99 | 00:00:13.99
- 23:59:59.99 | @ 1 min                       | 00:00:59.99 | 23:58:59.99
- 23:59:59.99 | @ 5 hours                     | 04:59:59.99 | 18:59:59.99
- 23:59:59.99 | @ 1 day 2 hours 3 mins 4 secs | 02:03:03.99 | 21:56:55.99
- 23:59:59.99 | @ 10 days                     | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 3 mons                      | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 5 mons                      | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 5 mons 12 hours             | 11:59:59.99 | 11:59:59.99
- 23:59:59.99 | @ 6 years                     | 23:59:59.99 | 23:59:59.99
- 23:59:59.99 | @ 34 years                    | 23:59:59.99 | 23:59:59.99
+      t      |        i        |     add     |  subtract   
+-------------+-----------------+-------------+-------------
+ 00:00:00    | -00:00:14       | 23:59:46    | 00:00:14
+ 00:00:00    | 00:01:00        | 00:01:00    | 23:59:00
+ 00:00:00    | 05:00:00        | 05:00:00    | 19:00:00
+ 00:00:00    | 1 day 02:03:04  | 02:03:04    | 21:56:56
+ 00:00:00    | 10 days         | 00:00:00    | 00:00:00
+ 00:00:00    | 3 mons          | 00:00:00    | 00:00:00
+ 00:00:00    | 5 mons          | 00:00:00    | 00:00:00
+ 00:00:00    | 5 mons 12:00:00 | 12:00:00    | 12:00:00
+ 00:00:00    | 6 years         | 00:00:00    | 00:00:00
+ 00:00:00    | 34 years        | 00:00:00    | 00:00:00
+ 01:00:00    | -00:00:14       | 00:59:46    | 01:00:14
+ 01:00:00    | 00:01:00        | 01:01:00    | 00:59:00
+ 01:00:00    | 05:00:00        | 06:00:00    | 20:00:00
+ 01:00:00    | 1 day 02:03:04  | 03:03:04    | 22:56:56
+ 01:00:00    | 10 days         | 01:00:00    | 01:00:00
+ 01:00:00    | 3 mons          | 01:00:00    | 01:00:00
+ 01:00:00    | 5 mons          | 01:00:00    | 01:00:00
+ 01:00:00    | 5 mons 12:00:00 | 13:00:00    | 13:00:00
+ 01:00:00    | 6 years         | 01:00:00    | 01:00:00
+ 01:00:00    | 34 years        | 01:00:00    | 01:00:00
+ 02:03:00    | -00:00:14       | 02:02:46    | 02:03:14
+ 02:03:00    | 00:01:00        | 02:04:00    | 02:02:00
+ 02:03:00    | 05:00:00        | 07:03:00    | 21:03:00
+ 02:03:00    | 1 day 02:03:04  | 04:06:04    | 23:59:56
+ 02:03:00    | 10 days         | 02:03:00    | 02:03:00
+ 02:03:00    | 3 mons          | 02:03:00    | 02:03:00
+ 02:03:00    | 5 mons          | 02:03:00    | 02:03:00
+ 02:03:00    | 5 mons 12:00:00 | 14:03:00    | 14:03:00
+ 02:03:00    | 6 years         | 02:03:00    | 02:03:00
+ 02:03:00    | 34 years        | 02:03:00    | 02:03:00
+ 11:59:00    | -00:00:14       | 11:58:46    | 11:59:14
+ 11:59:00    | 00:01:00        | 12:00:00    | 11:58:00
+ 11:59:00    | 05:00:00        | 16:59:00    | 06:59:00
+ 11:59:00    | 1 day 02:03:04  | 14:02:04    | 09:55:56
+ 11:59:00    | 10 days         | 11:59:00    | 11:59:00
+ 11:59:00    | 3 mons          | 11:59:00    | 11:59:00
+ 11:59:00    | 5 mons          | 11:59:00    | 11:59:00
+ 11:59:00    | 5 mons 12:00:00 | 23:59:00    | 23:59:00
+ 11:59:00    | 6 years         | 11:59:00    | 11:59:00
+ 11:59:00    | 34 years        | 11:59:00    | 11:59:00
+ 12:00:00    | -00:00:14       | 11:59:46    | 12:00:14
+ 12:00:00    | 00:01:00        | 12:01:00    | 11:59:00
+ 12:00:00    | 05:00:00        | 17:00:00    | 07:00:00
+ 12:00:00    | 1 day 02:03:04  | 14:03:04    | 09:56:56
+ 12:00:00    | 10 days         | 12:00:00    | 12:00:00
+ 12:00:00    | 3 mons          | 12:00:00    | 12:00:00
+ 12:00:00    | 5 mons          | 12:00:00    | 12:00:00
+ 12:00:00    | 5 mons 12:00:00 | 00:00:00    | 00:00:00
+ 12:00:00    | 6 years         | 12:00:00    | 12:00:00
+ 12:00:00    | 34 years        | 12:00:00    | 12:00:00
+ 12:01:00    | -00:00:14       | 12:00:46    | 12:01:14
+ 12:01:00    | 00:01:00        | 12:02:00    | 12:00:00
+ 12:01:00    | 05:00:00        | 17:01:00    | 07:01:00
+ 12:01:00    | 1 day 02:03:04  | 14:04:04    | 09:57:56
+ 12:01:00    | 10 days         | 12:01:00    | 12:01:00
+ 12:01:00    | 3 mons          | 12:01:00    | 12:01:00
+ 12:01:00    | 5 mons          | 12:01:00    | 12:01:00
+ 12:01:00    | 5 mons 12:00:00 | 00:01:00    | 00:01:00
+ 12:01:00    | 6 years         | 12:01:00    | 12:01:00
+ 12:01:00    | 34 years        | 12:01:00    | 12:01:00
+ 15:36:39    | -00:00:14       | 15:36:25    | 15:36:53
+ 15:36:39    | -00:00:14       | 15:36:25    | 15:36:53
+ 15:36:39    | 00:01:00        | 15:37:39    | 15:35:39
+ 15:36:39    | 00:01:00        | 15:37:39    | 15:35:39
+ 15:36:39    | 05:00:00        | 20:36:39    | 10:36:39
+ 15:36:39    | 05:00:00        | 20:36:39    | 10:36:39
+ 15:36:39    | 1 day 02:03:04  | 17:39:43    | 13:33:35
+ 15:36:39    | 1 day 02:03:04  | 17:39:43    | 13:33:35
+ 15:36:39    | 10 days         | 15:36:39    | 15:36:39
+ 15:36:39    | 10 days         | 15:36:39    | 15:36:39
+ 15:36:39    | 3 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 3 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons          | 15:36:39    | 15:36:39
+ 15:36:39    | 5 mons 12:00:00 | 03:36:39    | 03:36:39
+ 15:36:39    | 5 mons 12:00:00 | 03:36:39    | 03:36:39
+ 15:36:39    | 6 years         | 15:36:39    | 15:36:39
+ 15:36:39    | 6 years         | 15:36:39    | 15:36:39
+ 15:36:39    | 34 years        | 15:36:39    | 15:36:39
+ 15:36:39    | 34 years        | 15:36:39    | 15:36:39
+ 23:59:00    | -00:00:14       | 23:58:46    | 23:59:14
+ 23:59:00    | 00:01:00        | 00:00:00    | 23:58:00
+ 23:59:00    | 05:00:00        | 04:59:00    | 18:59:00
+ 23:59:00    | 1 day 02:03:04  | 02:02:04    | 21:55:56
+ 23:59:00    | 10 days         | 23:59:00    | 23:59:00
+ 23:59:00    | 3 mons          | 23:59:00    | 23:59:00
+ 23:59:00    | 5 mons          | 23:59:00    | 23:59:00
+ 23:59:00    | 5 mons 12:00:00 | 11:59:00    | 11:59:00
+ 23:59:00    | 6 years         | 23:59:00    | 23:59:00
+ 23:59:00    | 34 years        | 23:59:00    | 23:59:00
+ 23:59:59.99 | -00:00:14       | 23:59:45.99 | 00:00:13.99
+ 23:59:59.99 | 00:01:00        | 00:00:59.99 | 23:58:59.99
+ 23:59:59.99 | 05:00:00        | 04:59:59.99 | 18:59:59.99
+ 23:59:59.99 | 1 day 02:03:04  | 02:03:03.99 | 21:56:55.99
+ 23:59:59.99 | 10 days         | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 3 mons          | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 5 mons          | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 5 mons 12:00:00 | 11:59:59.99 | 11:59:59.99
+ 23:59:59.99 | 6 years         | 23:59:59.99 | 23:59:59.99
+ 23:59:59.99 | 34 years        | 23:59:59.99 | 23:59:59.99
 (100 rows)
 
 SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS "add", t.f1 - i.f1 AS "subtract"
   FROM TIMETZ_TBL t, INTERVAL_TBL i
   ORDER BY 1,2;
-       t        |               i               |      add       |    subtract    
-----------------+-------------------------------+----------------+----------------
- 00:01:00-07    | @ 14 secs ago                 | 00:00:46-07    | 00:01:14-07
- 00:01:00-07    | @ 1 min                       | 00:02:00-07    | 00:00:00-07
- 00:01:00-07    | @ 5 hours                     | 05:01:00-07    | 19:01:00-07
- 00:01:00-07    | @ 1 day 2 hours 3 mins 4 secs | 02:04:04-07    | 21:57:56-07
- 00:01:00-07    | @ 10 days                     | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 3 mons                      | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 5 mons                      | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 5 mons 12 hours             | 12:01:00-07    | 12:01:00-07
- 00:01:00-07    | @ 6 years                     | 00:01:00-07    | 00:01:00-07
- 00:01:00-07    | @ 34 years                    | 00:01:00-07    | 00:01:00-07
- 01:00:00-07    | @ 14 secs ago                 | 00:59:46-07    | 01:00:14-07
- 01:00:00-07    | @ 1 min                       | 01:01:00-07    | 00:59:00-07
- 01:00:00-07    | @ 5 hours                     | 06:00:00-07    | 20:00:00-07
- 01:00:00-07    | @ 1 day 2 hours 3 mins 4 secs | 03:03:04-07    | 22:56:56-07
- 01:00:00-07    | @ 10 days                     | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 3 mons                      | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 5 mons                      | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 5 mons 12 hours             | 13:00:00-07    | 13:00:00-07
- 01:00:00-07    | @ 6 years                     | 01:00:00-07    | 01:00:00-07
- 01:00:00-07    | @ 34 years                    | 01:00:00-07    | 01:00:00-07
- 02:03:00-07    | @ 14 secs ago                 | 02:02:46-07    | 02:03:14-07
- 02:03:00-07    | @ 1 min                       | 02:04:00-07    | 02:02:00-07
- 02:03:00-07    | @ 5 hours                     | 07:03:00-07    | 21:03:00-07
- 02:03:00-07    | @ 1 day 2 hours 3 mins 4 secs | 04:06:04-07    | 23:59:56-07
- 02:03:00-07    | @ 10 days                     | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 3 mons                      | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 5 mons                      | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 5 mons 12 hours             | 14:03:00-07    | 14:03:00-07
- 02:03:00-07    | @ 6 years                     | 02:03:00-07    | 02:03:00-07
- 02:03:00-07    | @ 34 years                    | 02:03:00-07    | 02:03:00-07
- 08:08:00-04    | @ 14 secs ago                 | 08:07:46-04    | 08:08:14-04
- 08:08:00-04    | @ 1 min                       | 08:09:00-04    | 08:07:00-04
- 08:08:00-04    | @ 5 hours                     | 13:08:00-04    | 03:08:00-04
- 08:08:00-04    | @ 1 day 2 hours 3 mins 4 secs | 10:11:04-04    | 06:04:56-04
- 08:08:00-04    | @ 10 days                     | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 3 mons                      | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 5 mons                      | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 5 mons 12 hours             | 20:08:00-04    | 20:08:00-04
- 08:08:00-04    | @ 6 years                     | 08:08:00-04    | 08:08:00-04
- 08:08:00-04    | @ 34 years                    | 08:08:00-04    | 08:08:00-04
- 07:07:00-08    | @ 14 secs ago                 | 07:06:46-08    | 07:07:14-08
- 07:07:00-08    | @ 1 min                       | 07:08:00-08    | 07:06:00-08
- 07:07:00-08    | @ 5 hours                     | 12:07:00-08    | 02:07:00-08
- 07:07:00-08    | @ 1 day 2 hours 3 mins 4 secs | 09:10:04-08    | 05:03:56-08
- 07:07:00-08    | @ 10 days                     | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 3 mons                      | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 5 mons                      | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 5 mons 12 hours             | 19:07:00-08    | 19:07:00-08
- 07:07:00-08    | @ 6 years                     | 07:07:00-08    | 07:07:00-08
- 07:07:00-08    | @ 34 years                    | 07:07:00-08    | 07:07:00-08
- 11:59:00-07    | @ 14 secs ago                 | 11:58:46-07    | 11:59:14-07
- 11:59:00-07    | @ 1 min                       | 12:00:00-07    | 11:58:00-07
- 11:59:00-07    | @ 5 hours                     | 16:59:00-07    | 06:59:00-07
- 11:59:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:02:04-07    | 09:55:56-07
- 11:59:00-07    | @ 10 days                     | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 3 mons                      | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 5 mons                      | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 5 mons 12 hours             | 23:59:00-07    | 23:59:00-07
- 11:59:00-07    | @ 6 years                     | 11:59:00-07    | 11:59:00-07
- 11:59:00-07    | @ 34 years                    | 11:59:00-07    | 11:59:00-07
- 12:00:00-07    | @ 14 secs ago                 | 11:59:46-07    | 12:00:14-07
- 12:00:00-07    | @ 1 min                       | 12:01:00-07    | 11:59:00-07
- 12:00:00-07    | @ 5 hours                     | 17:00:00-07    | 07:00:00-07
- 12:00:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:03:04-07    | 09:56:56-07
- 12:00:00-07    | @ 10 days                     | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 3 mons                      | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 5 mons                      | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 5 mons 12 hours             | 00:00:00-07    | 00:00:00-07
- 12:00:00-07    | @ 6 years                     | 12:00:00-07    | 12:00:00-07
- 12:00:00-07    | @ 34 years                    | 12:00:00-07    | 12:00:00-07
- 12:01:00-07    | @ 14 secs ago                 | 12:00:46-07    | 12:01:14-07
- 12:01:00-07    | @ 1 min                       | 12:02:00-07    | 12:00:00-07
- 12:01:00-07    | @ 5 hours                     | 17:01:00-07    | 07:01:00-07
- 12:01:00-07    | @ 1 day 2 hours 3 mins 4 secs | 14:04:04-07    | 09:57:56-07
- 12:01:00-07    | @ 10 days                     | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 3 mons                      | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 5 mons                      | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 5 mons 12 hours             | 00:01:00-07    | 00:01:00-07
- 12:01:00-07    | @ 6 years                     | 12:01:00-07    | 12:01:00-07
- 12:01:00-07    | @ 34 years                    | 12:01:00-07    | 12:01:00-07
- 15:36:39-04    | @ 14 secs ago                 | 15:36:25-04    | 15:36:53-04
- 15:36:39-04    | @ 1 min                       | 15:37:39-04    | 15:35:39-04
- 15:36:39-04    | @ 5 hours                     | 20:36:39-04    | 10:36:39-04
- 15:36:39-04    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43-04    | 13:33:35-04
- 15:36:39-04    | @ 10 days                     | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 3 mons                      | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 5 mons                      | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 5 mons 12 hours             | 03:36:39-04    | 03:36:39-04
- 15:36:39-04    | @ 6 years                     | 15:36:39-04    | 15:36:39-04
- 15:36:39-04    | @ 34 years                    | 15:36:39-04    | 15:36:39-04
- 15:36:39-05    | @ 14 secs ago                 | 15:36:25-05    | 15:36:53-05
- 15:36:39-05    | @ 1 min                       | 15:37:39-05    | 15:35:39-05
- 15:36:39-05    | @ 5 hours                     | 20:36:39-05    | 10:36:39-05
- 15:36:39-05    | @ 1 day 2 hours 3 mins 4 secs | 17:39:43-05    | 13:33:35-05
- 15:36:39-05    | @ 10 days                     | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 3 mons                      | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 5 mons                      | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 5 mons 12 hours             | 03:36:39-05    | 03:36:39-05
- 15:36:39-05    | @ 6 years                     | 15:36:39-05    | 15:36:39-05
- 15:36:39-05    | @ 34 years                    | 15:36:39-05    | 15:36:39-05
- 23:59:00-07    | @ 14 secs ago                 | 23:58:46-07    | 23:59:14-07
- 23:59:00-07    | @ 1 min                       | 00:00:00-07    | 23:58:00-07
- 23:59:00-07    | @ 5 hours                     | 04:59:00-07    | 18:59:00-07
- 23:59:00-07    | @ 1 day 2 hours 3 mins 4 secs | 02:02:04-07    | 21:55:56-07
- 23:59:00-07    | @ 10 days                     | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 3 mons                      | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 5 mons                      | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 5 mons 12 hours             | 11:59:00-07    | 11:59:00-07
- 23:59:00-07    | @ 6 years                     | 23:59:00-07    | 23:59:00-07
- 23:59:00-07    | @ 34 years                    | 23:59:00-07    | 23:59:00-07
- 23:59:59.99-07 | @ 14 secs ago                 | 23:59:45.99-07 | 00:00:13.99-07
- 23:59:59.99-07 | @ 1 min                       | 00:00:59.99-07 | 23:58:59.99-07
- 23:59:59.99-07 | @ 5 hours                     | 04:59:59.99-07 | 18:59:59.99-07
- 23:59:59.99-07 | @ 1 day 2 hours 3 mins 4 secs | 02:03:03.99-07 | 21:56:55.99-07
- 23:59:59.99-07 | @ 10 days                     | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 3 mons                      | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 5 mons                      | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 5 mons 12 hours             | 11:59:59.99-07 | 11:59:59.99-07
- 23:59:59.99-07 | @ 6 years                     | 23:59:59.99-07 | 23:59:59.99-07
- 23:59:59.99-07 | @ 34 years                    | 23:59:59.99-07 | 23:59:59.99-07
+       t        |        i        |      add       |    subtract    
+----------------+-----------------+----------------+----------------
+ 00:01:00-07    | -00:00:14       | 00:00:46-07    | 00:01:14-07
+ 00:01:00-07    | 00:01:00        | 00:02:00-07    | 00:00:00-07
+ 00:01:00-07    | 05:00:00        | 05:01:00-07    | 19:01:00-07
+ 00:01:00-07    | 1 day 02:03:04  | 02:04:04-07    | 21:57:56-07
+ 00:01:00-07    | 10 days         | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 3 mons          | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 5 mons          | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 5 mons 12:00:00 | 12:01:00-07    | 12:01:00-07
+ 00:01:00-07    | 6 years         | 00:01:00-07    | 00:01:00-07
+ 00:01:00-07    | 34 years        | 00:01:00-07    | 00:01:00-07
+ 01:00:00-07    | -00:00:14       | 00:59:46-07    | 01:00:14-07
+ 01:00:00-07    | 00:01:00        | 01:01:00-07    | 00:59:00-07
+ 01:00:00-07    | 05:00:00        | 06:00:00-07    | 20:00:00-07
+ 01:00:00-07    | 1 day 02:03:04  | 03:03:04-07    | 22:56:56-07
+ 01:00:00-07    | 10 days         | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 3 mons          | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 5 mons          | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 5 mons 12:00:00 | 13:00:00-07    | 13:00:00-07
+ 01:00:00-07    | 6 years         | 01:00:00-07    | 01:00:00-07
+ 01:00:00-07    | 34 years        | 01:00:00-07    | 01:00:00-07
+ 02:03:00-07    | -00:00:14       | 02:02:46-07    | 02:03:14-07
+ 02:03:00-07    | 00:01:00        | 02:04:00-07    | 02:02:00-07
+ 02:03:00-07    | 05:00:00        | 07:03:00-07    | 21:03:00-07
+ 02:03:00-07    | 1 day 02:03:04  | 04:06:04-07    | 23:59:56-07
+ 02:03:00-07    | 10 days         | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 3 mons          | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 5 mons          | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 5 mons 12:00:00 | 14:03:00-07    | 14:03:00-07
+ 02:03:00-07    | 6 years         | 02:03:00-07    | 02:03:00-07
+ 02:03:00-07    | 34 years        | 02:03:00-07    | 02:03:00-07
+ 08:08:00-04    | -00:00:14       | 08:07:46-04    | 08:08:14-04
+ 08:08:00-04    | 00:01:00        | 08:09:00-04    | 08:07:00-04
+ 08:08:00-04    | 05:00:00        | 13:08:00-04    | 03:08:00-04
+ 08:08:00-04    | 1 day 02:03:04  | 10:11:04-04    | 06:04:56-04
+ 08:08:00-04    | 10 days         | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 3 mons          | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 5 mons          | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 5 mons 12:00:00 | 20:08:00-04    | 20:08:00-04
+ 08:08:00-04    | 6 years         | 08:08:00-04    | 08:08:00-04
+ 08:08:00-04    | 34 years        | 08:08:00-04    | 08:08:00-04
+ 07:07:00-08    | -00:00:14       | 07:06:46-08    | 07:07:14-08
+ 07:07:00-08    | 00:01:00        | 07:08:00-08    | 07:06:00-08
+ 07:07:00-08    | 05:00:00        | 12:07:00-08    | 02:07:00-08
+ 07:07:00-08    | 1 day 02:03:04  | 09:10:04-08    | 05:03:56-08
+ 07:07:00-08    | 10 days         | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 3 mons          | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 5 mons          | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 5 mons 12:00:00 | 19:07:00-08    | 19:07:00-08
+ 07:07:00-08    | 6 years         | 07:07:00-08    | 07:07:00-08
+ 07:07:00-08    | 34 years        | 07:07:00-08    | 07:07:00-08
+ 11:59:00-07    | -00:00:14       | 11:58:46-07    | 11:59:14-07
+ 11:59:00-07    | 00:01:00        | 12:00:00-07    | 11:58:00-07
+ 11:59:00-07    | 05:00:00        | 16:59:00-07    | 06:59:00-07
+ 11:59:00-07    | 1 day 02:03:04  | 14:02:04-07    | 09:55:56-07
+ 11:59:00-07    | 10 days         | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 3 mons          | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 5 mons          | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 5 mons 12:00:00 | 23:59:00-07    | 23:59:00-07
+ 11:59:00-07    | 6 years         | 11:59:00-07    | 11:59:00-07
+ 11:59:00-07    | 34 years        | 11:59:00-07    | 11:59:00-07
+ 12:00:00-07    | -00:00:14       | 11:59:46-07    | 12:00:14-07
+ 12:00:00-07    | 00:01:00        | 12:01:00-07    | 11:59:00-07
+ 12:00:00-07    | 05:00:00        | 17:00:00-07    | 07:00:00-07
+ 12:00:00-07    | 1 day 02:03:04  | 14:03:04-07    | 09:56:56-07
+ 12:00:00-07    | 10 days         | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 3 mons          | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 5 mons          | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 5 mons 12:00:00 | 00:00:00-07    | 00:00:00-07
+ 12:00:00-07    | 6 years         | 12:00:00-07    | 12:00:00-07
+ 12:00:00-07    | 34 years        | 12:00:00-07    | 12:00:00-07
+ 12:01:00-07    | -00:00:14       | 12:00:46-07    | 12:01:14-07
+ 12:01:00-07    | 00:01:00        | 12:02:00-07    | 12:00:00-07
+ 12:01:00-07    | 05:00:00        | 17:01:00-07    | 07:01:00-07
+ 12:01:00-07    | 1 day 02:03:04  | 14:04:04-07    | 09:57:56-07
+ 12:01:00-07    | 10 days         | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 3 mons          | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 5 mons          | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 5 mons 12:00:00 | 00:01:00-07    | 00:01:00-07
+ 12:01:00-07    | 6 years         | 12:01:00-07    | 12:01:00-07
+ 12:01:00-07    | 34 years        | 12:01:00-07    | 12:01:00-07
+ 15:36:39-04    | -00:00:14       | 15:36:25-04    | 15:36:53-04
+ 15:36:39-04    | 00:01:00        | 15:37:39-04    | 15:35:39-04
+ 15:36:39-04    | 05:00:00        | 20:36:39-04    | 10:36:39-04
+ 15:36:39-04    | 1 day 02:03:04  | 17:39:43-04    | 13:33:35-04
+ 15:36:39-04    | 10 days         | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 3 mons          | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 5 mons          | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 5 mons 12:00:00 | 03:36:39-04    | 03:36:39-04
+ 15:36:39-04    | 6 years         | 15:36:39-04    | 15:36:39-04
+ 15:36:39-04    | 34 years        | 15:36:39-04    | 15:36:39-04
+ 15:36:39-05    | -00:00:14       | 15:36:25-05    | 15:36:53-05
+ 15:36:39-05    | 00:01:00        | 15:37:39-05    | 15:35:39-05
+ 15:36:39-05    | 05:00:00        | 20:36:39-05    | 10:36:39-05
+ 15:36:39-05    | 1 day 02:03:04  | 17:39:43-05    | 13:33:35-05
+ 15:36:39-05    | 10 days         | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 3 mons          | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 5 mons          | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 5 mons 12:00:00 | 03:36:39-05    | 03:36:39-05
+ 15:36:39-05    | 6 years         | 15:36:39-05    | 15:36:39-05
+ 15:36:39-05    | 34 years        | 15:36:39-05    | 15:36:39-05
+ 23:59:00-07    | -00:00:14       | 23:58:46-07    | 23:59:14-07
+ 23:59:00-07    | 00:01:00        | 00:00:00-07    | 23:58:00-07
+ 23:59:00-07    | 05:00:00        | 04:59:00-07    | 18:59:00-07
+ 23:59:00-07    | 1 day 02:03:04  | 02:02:04-07    | 21:55:56-07
+ 23:59:00-07    | 10 days         | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 3 mons          | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 5 mons          | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 5 mons 12:00:00 | 11:59:00-07    | 11:59:00-07
+ 23:59:00-07    | 6 years         | 23:59:00-07    | 23:59:00-07
+ 23:59:00-07    | 34 years        | 23:59:00-07    | 23:59:00-07
+ 23:59:59.99-07 | -00:00:14       | 23:59:45.99-07 | 00:00:13.99-07
+ 23:59:59.99-07 | 00:01:00        | 00:00:59.99-07 | 23:58:59.99-07
+ 23:59:59.99-07 | 05:00:00        | 04:59:59.99-07 | 18:59:59.99-07
+ 23:59:59.99-07 | 1 day 02:03:04  | 02:03:03.99-07 | 21:56:55.99-07
+ 23:59:59.99-07 | 10 days         | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 3 mons          | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 5 mons          | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 5 mons 12:00:00 | 11:59:59.99-07 | 11:59:59.99-07
+ 23:59:59.99-07 | 6 years         | 23:59:59.99-07 | 23:59:59.99-07
+ 23:59:59.99-07 | 34 years        | 23:59:59.99-07 | 23:59:59.99-07
 (120 rows)
 
 -- SQL9x OVERLAPS operator
@@ -1405,357 +1403,357 @@
   ORDER BY "timestamp";
  16 |          timestamp           
 ----+------------------------------
-    | Thu Jan 01 00:00:00 1970 PST
-    | Wed Feb 28 17:32:01 1996 PST
-    | Thu Feb 29 17:32:01 1996 PST
-    | Fri Mar 01 17:32:01 1996 PST
-    | Mon Dec 30 17:32:01 1996 PST
-    | Tue Dec 31 17:32:01 1996 PST
-    | Fri Dec 31 17:32:01 1999 PST
-    | Sat Jan 01 17:32:01 2000 PST
-    | Wed Mar 15 02:14:05 2000 PST
-    | Wed Mar 15 03:14:04 2000 PST
-    | Wed Mar 15 08:14:01 2000 PST
-    | Wed Mar 15 12:14:03 2000 PST
-    | Wed Mar 15 13:14:02 2000 PST
-    | Sun Dec 31 17:32:01 2000 PST
-    | Mon Jan 01 17:32:01 2001 PST
-    | Sat Sep 22 18:19:20 2001 PDT
+    | Thu Jan 01 00:00:00 1970 -05
+    | Wed Feb 28 17:32:01 1996 -05
+    | Thu Feb 29 17:32:01 1996 -05
+    | Fri Mar 01 17:32:01 1996 -05
+    | Mon Dec 30 17:32:01 1996 -05
+    | Tue Dec 31 17:32:01 1996 -05
+    | Fri Dec 31 17:32:01 1999 -05
+    | Sat Jan 01 17:32:01 2000 -05
+    | Wed Mar 15 02:14:05 2000 -05
+    | Wed Mar 15 03:14:04 2000 -05
+    | Wed Mar 15 08:14:01 2000 -05
+    | Wed Mar 15 12:14:03 2000 -05
+    | Wed Mar 15 13:14:02 2000 -05
+    | Sun Dec 31 17:32:01 2000 -05
+    | Mon Jan 01 17:32:01 2001 -05
+    | Sat Sep 22 18:19:20 2001 -05
 (16 rows)
 
 SELECT '' AS "160", d.f1 AS "timestamp", t.f1 AS "interval", d.f1 + t.f1 AS plus
   FROM TEMP_TIMESTAMP d, INTERVAL_TBL t
   ORDER BY plus, "timestamp", "interval";
- 160 |          timestamp           |           interval            |             plus             
------+------------------------------+-------------------------------+------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | @ 14 secs ago                 | Wed Dec 31 23:59:46 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 min                       | Thu Jan 01 00:01:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 hours                     | Thu Jan 01 05:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Jan 02 02:03:04 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 10 days                     | Sun Jan 11 00:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 3 mons                      | Wed Apr 01 00:00:00 1970 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons                      | Mon Jun 01 00:00:00 1970 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours             | Mon Jun 01 12:00:00 1970 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 6 years                     | Thu Jan 01 00:00:00 1976 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago                 | Wed Feb 28 17:31:47 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 min                       | Wed Feb 28 17:33:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 hours                     | Wed Feb 28 22:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 14 secs ago                 | Thu Feb 29 17:31:47 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 min                       | Thu Feb 29 17:33:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Feb 29 19:35:05 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 hours                     | Thu Feb 29 22:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago                 | Fri Mar 01 17:31:47 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 min                       | Fri Mar 01 17:33:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Mar 01 19:35:05 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 hours                     | Fri Mar 01 22:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Mar 02 19:35:05 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 10 days                     | Sat Mar 09 17:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 10 days                     | Sun Mar 10 17:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 10 days                     | Mon Mar 11 17:32:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 3 mons                      | Tue May 28 17:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 3 mons                      | Wed May 29 17:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 3 mons                      | Sat Jun 01 17:32:01 1996 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons                      | Sun Jul 28 17:32:01 1996 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours             | Mon Jul 29 05:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons                      | Mon Jul 29 17:32:01 1996 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours             | Tue Jul 30 05:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons                      | Thu Aug 01 17:32:01 1996 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours             | Fri Aug 02 05:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago                 | Mon Dec 30 17:31:47 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 min                       | Mon Dec 30 17:33:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 hours                     | Mon Dec 30 22:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago                 | Tue Dec 31 17:31:47 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 min                       | Tue Dec 31 17:33:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 31 19:35:05 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 hours                     | Tue Dec 31 22:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Wed Jan 01 19:35:05 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 10 days                     | Thu Jan 09 17:32:01 1997 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 10 days                     | Fri Jan 10 17:32:01 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 3 mons                      | Sun Mar 30 17:32:01 1997 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 3 mons                      | Mon Mar 31 17:32:01 1997 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons                      | Fri May 30 17:32:01 1997 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours             | Sat May 31 05:32:01 1997 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons                      | Sat May 31 17:32:01 1997 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours             | Sun Jun 01 05:32:01 1997 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago                 | Fri Dec 31 17:31:47 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 min                       | Fri Dec 31 17:33:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 hours                     | Fri Dec 31 22:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 14 secs ago                 | Sat Jan 01 17:31:47 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 min                       | Sat Jan 01 17:33:01 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Jan 01 19:35:05 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 hours                     | Sat Jan 01 22:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Jan 02 19:35:05 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 10 days                     | Mon Jan 10 17:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 10 days                     | Tue Jan 11 17:32:01 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 14 secs ago                 | Wed Mar 15 02:13:51 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 min                       | Wed Mar 15 02:15:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 14 secs ago                 | Wed Mar 15 03:13:50 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 min                       | Wed Mar 15 03:15:04 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 hours                     | Wed Mar 15 07:14:05 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago                 | Wed Mar 15 08:13:47 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 hours                     | Wed Mar 15 08:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 min                       | Wed Mar 15 08:15:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 14 secs ago                 | Wed Mar 15 12:13:49 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 min                       | Wed Mar 15 12:15:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 14 secs ago                 | Wed Mar 15 13:13:48 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 hours                     | Wed Mar 15 13:14:01 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 min                       | Wed Mar 15 13:15:02 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 hours                     | Wed Mar 15 17:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 hours                     | Wed Mar 15 18:14:02 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 04:17:09 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 05:17:08 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 10:17:05 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 14:17:07 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Mar 16 15:17:06 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 10 days                     | Sat Mar 25 02:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 10 days                     | Sat Mar 25 03:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 10 days                     | Sat Mar 25 08:14:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 10 days                     | Sat Mar 25 12:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 10 days                     | Sat Mar 25 13:14:02 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 3 mons                      | Fri Mar 31 17:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 3 mons                      | Sat Apr 01 17:32:01 2000 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons                      | Wed May 31 17:32:01 2000 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours             | Thu Jun 01 05:32:01 2000 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons                      | Thu Jun 01 17:32:01 2000 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours             | Fri Jun 02 05:32:01 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 3 mons                      | Thu Jun 15 02:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 3 mons                      | Thu Jun 15 03:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 3 mons                      | Thu Jun 15 08:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 3 mons                      | Thu Jun 15 12:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 3 mons                      | Thu Jun 15 13:14:02 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons                      | Tue Aug 15 02:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons                      | Tue Aug 15 03:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons                      | Tue Aug 15 08:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons                      | Tue Aug 15 12:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons                      | Tue Aug 15 13:14:02 2000 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 14:14:05 2000 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 15:14:04 2000 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours             | Tue Aug 15 20:14:01 2000 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons 12 hours             | Wed Aug 16 00:14:03 2000 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons 12 hours             | Wed Aug 16 01:14:02 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago                 | Sun Dec 31 17:31:47 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 min                       | Sun Dec 31 17:33:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 hours                     | Sun Dec 31 22:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 14 secs ago                 | Mon Jan 01 17:31:47 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 min                       | Mon Jan 01 17:33:01 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Mon Jan 01 19:35:05 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 hours                     | Mon Jan 01 22:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Jan 02 19:35:05 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 10 days                     | Wed Jan 10 17:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 10 days                     | Thu Jan 11 17:32:01 2001 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 3 mons                      | Sat Mar 31 17:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 3 mons                      | Sun Apr 01 17:32:01 2001 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons                      | Thu May 31 17:32:01 2001 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours             | Fri Jun 01 05:32:01 2001 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons                      | Fri Jun 01 17:32:01 2001 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours             | Sat Jun 02 05:32:01 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 14 secs ago                 | Sat Sep 22 18:19:06 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 min                       | Sat Sep 22 18:20:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 hours                     | Sat Sep 22 23:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 day 2 hours 3 mins 4 secs | Sun Sep 23 20:22:24 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 10 days                     | Tue Oct 02 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 3 mons                      | Sat Dec 22 18:19:20 2001 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons                      | Fri Feb 22 18:19:20 2002 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons 12 hours             | Sat Feb 23 06:19:20 2002 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 6 years                     | Thu Feb 28 17:32:01 2002 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 6 years                     | Thu Feb 28 17:32:01 2002 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 6 years                     | Fri Mar 01 17:32:01 2002 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 6 years                     | Mon Dec 30 17:32:01 2002 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 6 years                     | Tue Dec 31 17:32:01 2002 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 34 years                    | Thu Jan 01 00:00:00 2004 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 6 years                     | Sat Dec 31 17:32:01 2005 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 6 years                     | Sun Jan 01 17:32:01 2006 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 6 years                     | Wed Mar 15 02:14:05 2006 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 6 years                     | Wed Mar 15 03:14:04 2006 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 6 years                     | Wed Mar 15 08:14:01 2006 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 6 years                     | Wed Mar 15 12:14:03 2006 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 6 years                     | Wed Mar 15 13:14:02 2006 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 6 years                     | Sun Dec 31 17:32:01 2006 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 6 years                     | Mon Jan 01 17:32:01 2007 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 6 years                     | Sat Sep 22 18:19:20 2007 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 34 years                    | Thu Feb 28 17:32:01 2030 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 34 years                    | Thu Feb 28 17:32:01 2030 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 34 years                    | Fri Mar 01 17:32:01 2030 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 34 years                    | Mon Dec 30 17:32:01 2030 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 34 years                    | Tue Dec 31 17:32:01 2030 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 34 years                    | Sat Dec 31 17:32:01 2033 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 34 years                    | Sun Jan 01 17:32:01 2034 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 34 years                    | Wed Mar 15 02:14:05 2034 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 34 years                    | Wed Mar 15 03:14:04 2034 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 34 years                    | Wed Mar 15 08:14:01 2034 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 34 years                    | Wed Mar 15 12:14:03 2034 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 34 years                    | Wed Mar 15 13:14:02 2034 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 34 years                    | Sun Dec 31 17:32:01 2034 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 34 years                    | Mon Jan 01 17:32:01 2035 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 34 years                    | Sat Sep 22 18:19:20 2035 PDT
+ 160 |          timestamp           |    interval     |             plus             
+-----+------------------------------+-----------------+------------------------------
+     | Thu Jan 01 00:00:00 1970 -05 | -00:00:14       | Wed Dec 31 23:59:46 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 00:01:00        | Thu Jan 01 00:01:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 05:00:00        | Thu Jan 01 05:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 1 day 02:03:04  | Fri Jan 02 02:03:04 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 10 days         | Sun Jan 11 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 3 mons          | Wed Apr 01 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons          | Mon Jun 01 00:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons 12:00:00 | Mon Jun 01 12:00:00 1970 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 6 years         | Thu Jan 01 00:00:00 1976 -05
+     | Wed Feb 28 17:32:01 1996 -05 | -00:00:14       | Wed Feb 28 17:31:47 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 00:01:00        | Wed Feb 28 17:33:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 05:00:00        | Wed Feb 28 22:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | -00:00:14       | Thu Feb 29 17:31:47 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 00:01:00        | Thu Feb 29 17:33:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 1 day 02:03:04  | Thu Feb 29 19:35:05 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 05:00:00        | Thu Feb 29 22:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | -00:00:14       | Fri Mar 01 17:31:47 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 00:01:00        | Fri Mar 01 17:33:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 1 day 02:03:04  | Fri Mar 01 19:35:05 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 05:00:00        | Fri Mar 01 22:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 1 day 02:03:04  | Sat Mar 02 19:35:05 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 10 days         | Sat Mar 09 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 10 days         | Sun Mar 10 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 10 days         | Mon Mar 11 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 3 mons          | Tue May 28 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 3 mons          | Wed May 29 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 3 mons          | Sat Jun 01 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons          | Sun Jul 28 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons 12:00:00 | Mon Jul 29 05:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons          | Mon Jul 29 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons 12:00:00 | Tue Jul 30 05:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons          | Thu Aug 01 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons 12:00:00 | Fri Aug 02 05:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | -00:00:14       | Mon Dec 30 17:31:47 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 00:01:00        | Mon Dec 30 17:33:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 05:00:00        | Mon Dec 30 22:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | -00:00:14       | Tue Dec 31 17:31:47 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 00:01:00        | Tue Dec 31 17:33:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 1 day 02:03:04  | Tue Dec 31 19:35:05 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 05:00:00        | Tue Dec 31 22:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 1 day 02:03:04  | Wed Jan 01 19:35:05 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 10 days         | Thu Jan 09 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 10 days         | Fri Jan 10 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 3 mons          | Sun Mar 30 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 3 mons          | Mon Mar 31 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons          | Fri May 30 17:32:01 1997 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons 12:00:00 | Sat May 31 05:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons          | Sat May 31 17:32:01 1997 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons 12:00:00 | Sun Jun 01 05:32:01 1997 -05
+     | Fri Dec 31 17:32:01 1999 -05 | -00:00:14       | Fri Dec 31 17:31:47 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 00:01:00        | Fri Dec 31 17:33:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 05:00:00        | Fri Dec 31 22:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | -00:00:14       | Sat Jan 01 17:31:47 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 00:01:00        | Sat Jan 01 17:33:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 1 day 02:03:04  | Sat Jan 01 19:35:05 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 05:00:00        | Sat Jan 01 22:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 1 day 02:03:04  | Sun Jan 02 19:35:05 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 10 days         | Mon Jan 10 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 10 days         | Tue Jan 11 17:32:01 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | -00:00:14       | Wed Mar 15 02:13:51 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 00:01:00        | Wed Mar 15 02:15:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | -00:00:14       | Wed Mar 15 03:13:50 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 00:01:00        | Wed Mar 15 03:15:04 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 05:00:00        | Wed Mar 15 07:14:05 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | -00:00:14       | Wed Mar 15 08:13:47 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 05:00:00        | Wed Mar 15 08:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 00:01:00        | Wed Mar 15 08:15:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | -00:00:14       | Wed Mar 15 12:13:49 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 00:01:00        | Wed Mar 15 12:15:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | -00:00:14       | Wed Mar 15 13:13:48 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 05:00:00        | Wed Mar 15 13:14:01 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 00:01:00        | Wed Mar 15 13:15:02 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 05:00:00        | Wed Mar 15 17:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 05:00:00        | Wed Mar 15 18:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 1 day 02:03:04  | Thu Mar 16 04:17:09 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 1 day 02:03:04  | Thu Mar 16 05:17:08 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 1 day 02:03:04  | Thu Mar 16 10:17:05 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 1 day 02:03:04  | Thu Mar 16 14:17:07 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 1 day 02:03:04  | Thu Mar 16 15:17:06 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 10 days         | Sat Mar 25 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 10 days         | Sat Mar 25 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 10 days         | Sat Mar 25 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 10 days         | Sat Mar 25 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 10 days         | Sat Mar 25 13:14:02 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 3 mons          | Fri Mar 31 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 3 mons          | Sat Apr 01 17:32:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons          | Wed May 31 17:32:01 2000 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons 12:00:00 | Thu Jun 01 05:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons          | Thu Jun 01 17:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons 12:00:00 | Fri Jun 02 05:32:01 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 3 mons          | Thu Jun 15 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 3 mons          | Thu Jun 15 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 3 mons          | Thu Jun 15 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 3 mons          | Thu Jun 15 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 3 mons          | Thu Jun 15 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons          | Tue Aug 15 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons          | Tue Aug 15 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons          | Tue Aug 15 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons          | Tue Aug 15 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons          | Tue Aug 15 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 14:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 15:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons 12:00:00 | Tue Aug 15 20:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons 12:00:00 | Wed Aug 16 00:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons 12:00:00 | Wed Aug 16 01:14:02 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | -00:00:14       | Sun Dec 31 17:31:47 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 00:01:00        | Sun Dec 31 17:33:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 05:00:00        | Sun Dec 31 22:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | -00:00:14       | Mon Jan 01 17:31:47 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 00:01:00        | Mon Jan 01 17:33:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 1 day 02:03:04  | Mon Jan 01 19:35:05 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 05:00:00        | Mon Jan 01 22:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 1 day 02:03:04  | Tue Jan 02 19:35:05 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 10 days         | Wed Jan 10 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 10 days         | Thu Jan 11 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 3 mons          | Sat Mar 31 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 3 mons          | Sun Apr 01 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons          | Thu May 31 17:32:01 2001 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons 12:00:00 | Fri Jun 01 05:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons          | Fri Jun 01 17:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons 12:00:00 | Sat Jun 02 05:32:01 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | -00:00:14       | Sat Sep 22 18:19:06 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 00:01:00        | Sat Sep 22 18:20:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 05:00:00        | Sat Sep 22 23:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 1 day 02:03:04  | Sun Sep 23 20:22:24 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 10 days         | Tue Oct 02 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 3 mons          | Sat Dec 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons          | Fri Feb 22 18:19:20 2002 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons 12:00:00 | Sat Feb 23 06:19:20 2002 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 6 years         | Thu Feb 28 17:32:01 2002 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 6 years         | Thu Feb 28 17:32:01 2002 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 6 years         | Fri Mar 01 17:32:01 2002 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 6 years         | Mon Dec 30 17:32:01 2002 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 6 years         | Tue Dec 31 17:32:01 2002 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 34 years        | Thu Jan 01 00:00:00 2004 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 6 years         | Sat Dec 31 17:32:01 2005 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 6 years         | Sun Jan 01 17:32:01 2006 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 6 years         | Wed Mar 15 02:14:05 2006 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 6 years         | Wed Mar 15 03:14:04 2006 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 6 years         | Wed Mar 15 08:14:01 2006 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 6 years         | Wed Mar 15 12:14:03 2006 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 6 years         | Wed Mar 15 13:14:02 2006 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 6 years         | Sun Dec 31 17:32:01 2006 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 6 years         | Mon Jan 01 17:32:01 2007 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 6 years         | Sat Sep 22 18:19:20 2007 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 34 years        | Thu Feb 28 17:32:01 2030 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 34 years        | Thu Feb 28 17:32:01 2030 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 34 years        | Fri Mar 01 17:32:01 2030 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 34 years        | Mon Dec 30 17:32:01 2030 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 34 years        | Tue Dec 31 17:32:01 2030 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 34 years        | Sat Dec 31 17:32:01 2033 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 34 years        | Sun Jan 01 17:32:01 2034 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 34 years        | Wed Mar 15 02:14:05 2034 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 34 years        | Wed Mar 15 03:14:04 2034 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 34 years        | Wed Mar 15 08:14:01 2034 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 34 years        | Wed Mar 15 12:14:03 2034 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 34 years        | Wed Mar 15 13:14:02 2034 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 34 years        | Sun Dec 31 17:32:01 2034 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 34 years        | Mon Jan 01 17:32:01 2035 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 34 years        | Sat Sep 22 18:19:20 2035 -05
 (160 rows)
 
 SELECT '' AS "160", d.f1 AS "timestamp", t.f1 AS "interval", d.f1 - t.f1 AS minus
   FROM TEMP_TIMESTAMP d, INTERVAL_TBL t
   WHERE isfinite(d.f1)
   ORDER BY minus, "timestamp", "interval";
- 160 |          timestamp           |           interval            |            minus             
------+------------------------------+-------------------------------+------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | @ 34 years                    | Wed Jan 01 00:00:00 1936 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 34 years                    | Wed Feb 28 17:32:01 1962 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 34 years                    | Wed Feb 28 17:32:01 1962 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 34 years                    | Thu Mar 01 17:32:01 1962 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 34 years                    | Sun Dec 30 17:32:01 1962 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 34 years                    | Mon Dec 31 17:32:01 1962 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 6 years                     | Wed Jan 01 00:00:00 1964 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 34 years                    | Fri Dec 31 17:32:01 1965 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 34 years                    | Sat Jan 01 17:32:01 1966 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 34 years                    | Tue Mar 15 02:14:05 1966 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 34 years                    | Tue Mar 15 03:14:04 1966 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 34 years                    | Tue Mar 15 08:14:01 1966 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 34 years                    | Tue Mar 15 12:14:03 1966 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 34 years                    | Tue Mar 15 13:14:02 1966 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 34 years                    | Sat Dec 31 17:32:01 1966 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 34 years                    | Sun Jan 01 17:32:01 1967 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 34 years                    | Fri Sep 22 18:19:20 1967 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons 12 hours             | Thu Jul 31 12:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 mons                      | Fri Aug 01 00:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 3 mons                      | Wed Oct 01 00:00:00 1969 PDT
-     | Thu Jan 01 00:00:00 1970 PST | @ 10 days                     | Mon Dec 22 00:00:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Dec 30 21:56:56 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 5 hours                     | Wed Dec 31 19:00:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 1 min                       | Wed Dec 31 23:59:00 1969 PST
-     | Thu Jan 01 00:00:00 1970 PST | @ 14 secs ago                 | Thu Jan 01 00:00:14 1970 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 6 years                     | Wed Feb 28 17:32:01 1990 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 6 years                     | Wed Feb 28 17:32:01 1990 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 6 years                     | Thu Mar 01 17:32:01 1990 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 6 years                     | Sun Dec 30 17:32:01 1990 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 6 years                     | Mon Dec 31 17:32:01 1990 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 6 years                     | Fri Dec 31 17:32:01 1993 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 6 years                     | Sat Jan 01 17:32:01 1994 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 6 years                     | Tue Mar 15 02:14:05 1994 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 6 years                     | Tue Mar 15 03:14:04 1994 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 6 years                     | Tue Mar 15 08:14:01 1994 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 6 years                     | Tue Mar 15 12:14:03 1994 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 6 years                     | Tue Mar 15 13:14:02 1994 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 6 years                     | Sat Dec 31 17:32:01 1994 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 6 years                     | Sun Jan 01 17:32:01 1995 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 6 years                     | Fri Sep 22 18:19:20 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons 12 hours             | Thu Sep 28 05:32:01 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 mons                      | Thu Sep 28 17:32:01 1995 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons 12 hours             | Fri Sep 29 05:32:01 1995 PDT
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 mons                      | Fri Sep 29 17:32:01 1995 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons 12 hours             | Sun Oct 01 05:32:01 1995 PDT
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 mons                      | Sun Oct 01 17:32:01 1995 PDT
-     | Wed Feb 28 17:32:01 1996 PST | @ 3 mons                      | Tue Nov 28 17:32:01 1995 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 3 mons                      | Wed Nov 29 17:32:01 1995 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 3 mons                      | Fri Dec 01 17:32:01 1995 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 10 days                     | Sun Feb 18 17:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 10 days                     | Mon Feb 19 17:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 10 days                     | Tue Feb 20 17:32:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Feb 27 15:28:57 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 5 hours                     | Wed Feb 28 12:32:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Wed Feb 28 15:28:57 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 1 min                       | Wed Feb 28 17:31:01 1996 PST
-     | Wed Feb 28 17:32:01 1996 PST | @ 14 secs ago                 | Wed Feb 28 17:32:15 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 5 hours                     | Thu Feb 29 12:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Feb 29 15:28:57 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 1 min                       | Thu Feb 29 17:31:01 1996 PST
-     | Thu Feb 29 17:32:01 1996 PST | @ 14 secs ago                 | Thu Feb 29 17:32:15 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 5 hours                     | Fri Mar 01 12:32:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 1 min                       | Fri Mar 01 17:31:01 1996 PST
-     | Fri Mar 01 17:32:01 1996 PST | @ 14 secs ago                 | Fri Mar 01 17:32:15 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons 12 hours             | Tue Jul 30 05:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 mons                      | Tue Jul 30 17:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons 12 hours             | Wed Jul 31 05:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 mons                      | Wed Jul 31 17:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 3 mons                      | Mon Sep 30 17:32:01 1996 PDT
-     | Tue Dec 31 17:32:01 1996 PST | @ 3 mons                      | Mon Sep 30 17:32:01 1996 PDT
-     | Mon Dec 30 17:32:01 1996 PST | @ 10 days                     | Fri Dec 20 17:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 10 days                     | Sat Dec 21 17:32:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 29 15:28:57 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 5 hours                     | Mon Dec 30 12:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 day 2 hours 3 mins 4 secs | Mon Dec 30 15:28:57 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 1 min                       | Mon Dec 30 17:31:01 1996 PST
-     | Mon Dec 30 17:32:01 1996 PST | @ 14 secs ago                 | Mon Dec 30 17:32:15 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 5 hours                     | Tue Dec 31 12:32:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 1 min                       | Tue Dec 31 17:31:01 1996 PST
-     | Tue Dec 31 17:32:01 1996 PST | @ 14 secs ago                 | Tue Dec 31 17:32:15 1996 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons 12 hours             | Sat Jul 31 05:32:01 1999 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 mons                      | Sat Jul 31 17:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons 12 hours             | Sun Aug 01 05:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 mons                      | Sun Aug 01 17:32:01 1999 PDT
-     | Fri Dec 31 17:32:01 1999 PST | @ 3 mons                      | Thu Sep 30 17:32:01 1999 PDT
-     | Sat Jan 01 17:32:01 2000 PST | @ 3 mons                      | Fri Oct 01 17:32:01 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 14:14:05 1999 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 15:14:04 1999 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons 12 hours             | Thu Oct 14 20:14:01 1999 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons 12 hours             | Fri Oct 15 00:14:03 1999 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons 12 hours             | Fri Oct 15 01:14:02 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 mons                      | Fri Oct 15 02:14:05 1999 PDT
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 mons                      | Fri Oct 15 03:14:04 1999 PDT
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 mons                      | Fri Oct 15 08:14:01 1999 PDT
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 mons                      | Fri Oct 15 12:14:03 1999 PDT
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 mons                      | Fri Oct 15 13:14:02 1999 PDT
-     | Wed Mar 15 02:14:05 2000 PST | @ 3 mons                      | Wed Dec 15 02:14:05 1999 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 3 mons                      | Wed Dec 15 03:14:04 1999 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 3 mons                      | Wed Dec 15 08:14:01 1999 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 3 mons                      | Wed Dec 15 12:14:03 1999 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 3 mons                      | Wed Dec 15 13:14:02 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 10 days                     | Tue Dec 21 17:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 10 days                     | Wed Dec 22 17:32:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 day 2 hours 3 mins 4 secs | Thu Dec 30 15:28:57 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 5 hours                     | Fri Dec 31 12:32:01 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Fri Dec 31 15:28:57 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 1 min                       | Fri Dec 31 17:31:01 1999 PST
-     | Fri Dec 31 17:32:01 1999 PST | @ 14 secs ago                 | Fri Dec 31 17:32:15 1999 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 5 hours                     | Sat Jan 01 12:32:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 1 min                       | Sat Jan 01 17:31:01 2000 PST
-     | Sat Jan 01 17:32:01 2000 PST | @ 14 secs ago                 | Sat Jan 01 17:32:15 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 10 days                     | Sun Mar 05 02:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 10 days                     | Sun Mar 05 03:14:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 10 days                     | Sun Mar 05 08:14:01 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 10 days                     | Sun Mar 05 12:14:03 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 10 days                     | Sun Mar 05 13:14:02 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 00:11:01 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 01:11:00 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 06:10:57 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 10:10:59 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Tue Mar 14 11:10:58 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 5 hours                     | Tue Mar 14 21:14:05 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 5 hours                     | Tue Mar 14 22:14:04 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 1 min                       | Wed Mar 15 02:13:05 2000 PST
-     | Wed Mar 15 02:14:05 2000 PST | @ 14 secs ago                 | Wed Mar 15 02:14:19 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 1 min                       | Wed Mar 15 03:13:04 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 5 hours                     | Wed Mar 15 03:14:01 2000 PST
-     | Wed Mar 15 03:14:04 2000 PST | @ 14 secs ago                 | Wed Mar 15 03:14:18 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 5 hours                     | Wed Mar 15 07:14:03 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 1 min                       | Wed Mar 15 08:13:01 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 5 hours                     | Wed Mar 15 08:14:02 2000 PST
-     | Wed Mar 15 08:14:01 2000 PST | @ 14 secs ago                 | Wed Mar 15 08:14:15 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 1 min                       | Wed Mar 15 12:13:03 2000 PST
-     | Wed Mar 15 12:14:03 2000 PST | @ 14 secs ago                 | Wed Mar 15 12:14:17 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 1 min                       | Wed Mar 15 13:13:02 2000 PST
-     | Wed Mar 15 13:14:02 2000 PST | @ 14 secs ago                 | Wed Mar 15 13:14:16 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons 12 hours             | Mon Jul 31 05:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 mons                      | Mon Jul 31 17:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons 12 hours             | Tue Aug 01 05:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 mons                      | Tue Aug 01 17:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 3 mons                      | Sat Sep 30 17:32:01 2000 PDT
-     | Mon Jan 01 17:32:01 2001 PST | @ 3 mons                      | Sun Oct 01 17:32:01 2000 PDT
-     | Sun Dec 31 17:32:01 2000 PST | @ 10 days                     | Thu Dec 21 17:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 10 days                     | Fri Dec 22 17:32:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 day 2 hours 3 mins 4 secs | Sat Dec 30 15:28:57 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 5 hours                     | Sun Dec 31 12:32:01 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 day 2 hours 3 mins 4 secs | Sun Dec 31 15:28:57 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 1 min                       | Sun Dec 31 17:31:01 2000 PST
-     | Sun Dec 31 17:32:01 2000 PST | @ 14 secs ago                 | Sun Dec 31 17:32:15 2000 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 5 hours                     | Mon Jan 01 12:32:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 1 min                       | Mon Jan 01 17:31:01 2001 PST
-     | Mon Jan 01 17:32:01 2001 PST | @ 14 secs ago                 | Mon Jan 01 17:32:15 2001 PST
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons 12 hours             | Sun Apr 22 06:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 mons                      | Sun Apr 22 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 3 mons                      | Fri Jun 22 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 10 days                     | Wed Sep 12 18:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 day 2 hours 3 mins 4 secs | Fri Sep 21 16:16:16 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 5 hours                     | Sat Sep 22 13:19:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 1 min                       | Sat Sep 22 18:18:20 2001 PDT
-     | Sat Sep 22 18:19:20 2001 PDT | @ 14 secs ago                 | Sat Sep 22 18:19:34 2001 PDT
+ 160 |          timestamp           |    interval     |            minus             
+-----+------------------------------+-----------------+------------------------------
+     | Thu Jan 01 00:00:00 1970 -05 | 34 years        | Wed Jan 01 00:00:00 1936 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 34 years        | Wed Feb 28 17:32:01 1962 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 34 years        | Wed Feb 28 17:32:01 1962 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 34 years        | Thu Mar 01 17:32:01 1962 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 34 years        | Sun Dec 30 17:32:01 1962 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 34 years        | Mon Dec 31 17:32:01 1962 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 6 years         | Wed Jan 01 00:00:00 1964 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 34 years        | Fri Dec 31 17:32:01 1965 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 34 years        | Sat Jan 01 17:32:01 1966 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 34 years        | Tue Mar 15 02:14:05 1966 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 34 years        | Tue Mar 15 03:14:04 1966 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 34 years        | Tue Mar 15 08:14:01 1966 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 34 years        | Tue Mar 15 12:14:03 1966 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 34 years        | Tue Mar 15 13:14:02 1966 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 34 years        | Sat Dec 31 17:32:01 1966 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 34 years        | Sun Jan 01 17:32:01 1967 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 34 years        | Fri Sep 22 18:19:20 1967 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons 12:00:00 | Thu Jul 31 12:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 5 mons          | Fri Aug 01 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 3 mons          | Wed Oct 01 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 10 days         | Mon Dec 22 00:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 1 day 02:03:04  | Tue Dec 30 21:56:56 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 05:00:00        | Wed Dec 31 19:00:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | 00:01:00        | Wed Dec 31 23:59:00 1969 -05
+     | Thu Jan 01 00:00:00 1970 -05 | -00:00:14       | Thu Jan 01 00:00:14 1970 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 6 years         | Wed Feb 28 17:32:01 1990 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 6 years         | Wed Feb 28 17:32:01 1990 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 6 years         | Thu Mar 01 17:32:01 1990 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 6 years         | Sun Dec 30 17:32:01 1990 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 6 years         | Mon Dec 31 17:32:01 1990 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 6 years         | Fri Dec 31 17:32:01 1993 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 6 years         | Sat Jan 01 17:32:01 1994 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 6 years         | Tue Mar 15 02:14:05 1994 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 6 years         | Tue Mar 15 03:14:04 1994 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 6 years         | Tue Mar 15 08:14:01 1994 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 6 years         | Tue Mar 15 12:14:03 1994 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 6 years         | Tue Mar 15 13:14:02 1994 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 6 years         | Sat Dec 31 17:32:01 1994 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 6 years         | Sun Jan 01 17:32:01 1995 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 6 years         | Fri Sep 22 18:19:20 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons 12:00:00 | Thu Sep 28 05:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 5 mons          | Thu Sep 28 17:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons 12:00:00 | Fri Sep 29 05:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 5 mons          | Fri Sep 29 17:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons 12:00:00 | Sun Oct 01 05:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 5 mons          | Sun Oct 01 17:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 3 mons          | Tue Nov 28 17:32:01 1995 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 3 mons          | Wed Nov 29 17:32:01 1995 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 3 mons          | Fri Dec 01 17:32:01 1995 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 10 days         | Sun Feb 18 17:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 10 days         | Mon Feb 19 17:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 10 days         | Tue Feb 20 17:32:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 1 day 02:03:04  | Tue Feb 27 15:28:57 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 05:00:00        | Wed Feb 28 12:32:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 1 day 02:03:04  | Wed Feb 28 15:28:57 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | 00:01:00        | Wed Feb 28 17:31:01 1996 -05
+     | Wed Feb 28 17:32:01 1996 -05 | -00:00:14       | Wed Feb 28 17:32:15 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 05:00:00        | Thu Feb 29 12:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 1 day 02:03:04  | Thu Feb 29 15:28:57 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | 00:01:00        | Thu Feb 29 17:31:01 1996 -05
+     | Thu Feb 29 17:32:01 1996 -05 | -00:00:14       | Thu Feb 29 17:32:15 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 05:00:00        | Fri Mar 01 12:32:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | 00:01:00        | Fri Mar 01 17:31:01 1996 -05
+     | Fri Mar 01 17:32:01 1996 -05 | -00:00:14       | Fri Mar 01 17:32:15 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons 12:00:00 | Tue Jul 30 05:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 5 mons          | Tue Jul 30 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons 12:00:00 | Wed Jul 31 05:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 5 mons          | Wed Jul 31 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 3 mons          | Mon Sep 30 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 3 mons          | Mon Sep 30 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 10 days         | Fri Dec 20 17:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 10 days         | Sat Dec 21 17:32:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 1 day 02:03:04  | Sun Dec 29 15:28:57 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 05:00:00        | Mon Dec 30 12:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 1 day 02:03:04  | Mon Dec 30 15:28:57 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | 00:01:00        | Mon Dec 30 17:31:01 1996 -05
+     | Mon Dec 30 17:32:01 1996 -05 | -00:00:14       | Mon Dec 30 17:32:15 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 05:00:00        | Tue Dec 31 12:32:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | 00:01:00        | Tue Dec 31 17:31:01 1996 -05
+     | Tue Dec 31 17:32:01 1996 -05 | -00:00:14       | Tue Dec 31 17:32:15 1996 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons 12:00:00 | Sat Jul 31 05:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 5 mons          | Sat Jul 31 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons 12:00:00 | Sun Aug 01 05:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 5 mons          | Sun Aug 01 17:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 3 mons          | Thu Sep 30 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 3 mons          | Fri Oct 01 17:32:01 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 14:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 15:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons 12:00:00 | Thu Oct 14 20:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons 12:00:00 | Fri Oct 15 00:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons 12:00:00 | Fri Oct 15 01:14:02 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 5 mons          | Fri Oct 15 02:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 5 mons          | Fri Oct 15 03:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 5 mons          | Fri Oct 15 08:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 5 mons          | Fri Oct 15 12:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 5 mons          | Fri Oct 15 13:14:02 1999 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 3 mons          | Wed Dec 15 02:14:05 1999 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 3 mons          | Wed Dec 15 03:14:04 1999 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 3 mons          | Wed Dec 15 08:14:01 1999 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 3 mons          | Wed Dec 15 12:14:03 1999 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 3 mons          | Wed Dec 15 13:14:02 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 10 days         | Tue Dec 21 17:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 10 days         | Wed Dec 22 17:32:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 1 day 02:03:04  | Thu Dec 30 15:28:57 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 05:00:00        | Fri Dec 31 12:32:01 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 1 day 02:03:04  | Fri Dec 31 15:28:57 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | 00:01:00        | Fri Dec 31 17:31:01 1999 -05
+     | Fri Dec 31 17:32:01 1999 -05 | -00:00:14       | Fri Dec 31 17:32:15 1999 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 05:00:00        | Sat Jan 01 12:32:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | 00:01:00        | Sat Jan 01 17:31:01 2000 -05
+     | Sat Jan 01 17:32:01 2000 -05 | -00:00:14       | Sat Jan 01 17:32:15 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 10 days         | Sun Mar 05 02:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 10 days         | Sun Mar 05 03:14:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 10 days         | Sun Mar 05 08:14:01 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 10 days         | Sun Mar 05 12:14:03 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 10 days         | Sun Mar 05 13:14:02 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 1 day 02:03:04  | Tue Mar 14 00:11:01 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 1 day 02:03:04  | Tue Mar 14 01:11:00 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 1 day 02:03:04  | Tue Mar 14 06:10:57 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 1 day 02:03:04  | Tue Mar 14 10:10:59 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 1 day 02:03:04  | Tue Mar 14 11:10:58 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 05:00:00        | Tue Mar 14 21:14:05 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 05:00:00        | Tue Mar 14 22:14:04 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | 00:01:00        | Wed Mar 15 02:13:05 2000 -05
+     | Wed Mar 15 02:14:05 2000 -05 | -00:00:14       | Wed Mar 15 02:14:19 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | 00:01:00        | Wed Mar 15 03:13:04 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 05:00:00        | Wed Mar 15 03:14:01 2000 -05
+     | Wed Mar 15 03:14:04 2000 -05 | -00:00:14       | Wed Mar 15 03:14:18 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 05:00:00        | Wed Mar 15 07:14:03 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | 00:01:00        | Wed Mar 15 08:13:01 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 05:00:00        | Wed Mar 15 08:14:02 2000 -05
+     | Wed Mar 15 08:14:01 2000 -05 | -00:00:14       | Wed Mar 15 08:14:15 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | 00:01:00        | Wed Mar 15 12:13:03 2000 -05
+     | Wed Mar 15 12:14:03 2000 -05 | -00:00:14       | Wed Mar 15 12:14:17 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | 00:01:00        | Wed Mar 15 13:13:02 2000 -05
+     | Wed Mar 15 13:14:02 2000 -05 | -00:00:14       | Wed Mar 15 13:14:16 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons 12:00:00 | Mon Jul 31 05:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 5 mons          | Mon Jul 31 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons 12:00:00 | Tue Aug 01 05:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 5 mons          | Tue Aug 01 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 3 mons          | Sat Sep 30 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 3 mons          | Sun Oct 01 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 10 days         | Thu Dec 21 17:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 10 days         | Fri Dec 22 17:32:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 1 day 02:03:04  | Sat Dec 30 15:28:57 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 05:00:00        | Sun Dec 31 12:32:01 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 1 day 02:03:04  | Sun Dec 31 15:28:57 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | 00:01:00        | Sun Dec 31 17:31:01 2000 -05
+     | Sun Dec 31 17:32:01 2000 -05 | -00:00:14       | Sun Dec 31 17:32:15 2000 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 05:00:00        | Mon Jan 01 12:32:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | 00:01:00        | Mon Jan 01 17:31:01 2001 -05
+     | Mon Jan 01 17:32:01 2001 -05 | -00:00:14       | Mon Jan 01 17:32:15 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons 12:00:00 | Sun Apr 22 06:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 5 mons          | Sun Apr 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 3 mons          | Fri Jun 22 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 10 days         | Wed Sep 12 18:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 1 day 02:03:04  | Fri Sep 21 16:16:16 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 05:00:00        | Sat Sep 22 13:19:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | 00:01:00        | Sat Sep 22 18:18:20 2001 -05
+     | Sat Sep 22 18:19:20 2001 -05 | -00:00:14       | Sat Sep 22 18:19:34 2001 -05
 (160 rows)
 
 SELECT '' AS "16", d.f1 AS "timestamp",
@@ -1763,287 +1761,287 @@
    d.f1 - timestamp with time zone '1980-01-06 00:00 GMT' AS difference
   FROM TEMP_TIMESTAMP d
   ORDER BY difference;
- 16 |          timestamp           |         gpstime_zero         |             difference              
-----+------------------------------+------------------------------+-------------------------------------
-    | Thu Jan 01 00:00:00 1970 PST | Sat Jan 05 16:00:00 1980 PST | @ 3656 days 16 hours ago
-    | Wed Feb 28 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5898 days 1 hour 32 mins 1 sec
-    | Thu Feb 29 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5899 days 1 hour 32 mins 1 sec
-    | Fri Mar 01 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 5900 days 1 hour 32 mins 1 sec
-    | Mon Dec 30 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 6204 days 1 hour 32 mins 1 sec
-    | Tue Dec 31 17:32:01 1996 PST | Sat Jan 05 16:00:00 1980 PST | @ 6205 days 1 hour 32 mins 1 sec
-    | Fri Dec 31 17:32:01 1999 PST | Sat Jan 05 16:00:00 1980 PST | @ 7300 days 1 hour 32 mins 1 sec
-    | Sat Jan 01 17:32:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7301 days 1 hour 32 mins 1 sec
-    | Wed Mar 15 02:14:05 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 10 hours 14 mins 5 secs
-    | Wed Mar 15 03:14:04 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 11 hours 14 mins 4 secs
-    | Wed Mar 15 08:14:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 16 hours 14 mins 1 sec
-    | Wed Mar 15 12:14:03 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 20 hours 14 mins 3 secs
-    | Wed Mar 15 13:14:02 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7374 days 21 hours 14 mins 2 secs
-    | Sun Dec 31 17:32:01 2000 PST | Sat Jan 05 16:00:00 1980 PST | @ 7666 days 1 hour 32 mins 1 sec
-    | Mon Jan 01 17:32:01 2001 PST | Sat Jan 05 16:00:00 1980 PST | @ 7667 days 1 hour 32 mins 1 sec
-    | Sat Sep 22 18:19:20 2001 PDT | Sat Jan 05 16:00:00 1980 PST | @ 7931 days 1 hour 19 mins 20 secs
+ 16 |          timestamp           |         gpstime_zero         |      difference      
+----+------------------------------+------------------------------+----------------------
+    | Thu Jan 01 00:00:00 1970 -05 | Sat Jan 05 19:00:00 1980 -05 | -3656 days -19:00:00
+    | Wed Feb 28 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5897 days 22:32:01
+    | Thu Feb 29 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5898 days 22:32:01
+    | Fri Mar 01 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 5899 days 22:32:01
+    | Mon Dec 30 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 6203 days 22:32:01
+    | Tue Dec 31 17:32:01 1996 -05 | Sat Jan 05 19:00:00 1980 -05 | 6204 days 22:32:01
+    | Fri Dec 31 17:32:01 1999 -05 | Sat Jan 05 19:00:00 1980 -05 | 7299 days 22:32:01
+    | Sat Jan 01 17:32:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7300 days 22:32:01
+    | Wed Mar 15 02:14:05 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 07:14:05
+    | Wed Mar 15 03:14:04 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 08:14:04
+    | Wed Mar 15 08:14:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 13:14:01
+    | Wed Mar 15 12:14:03 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 17:14:03
+    | Wed Mar 15 13:14:02 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7374 days 18:14:02
+    | Sun Dec 31 17:32:01 2000 -05 | Sat Jan 05 19:00:00 1980 -05 | 7665 days 22:32:01
+    | Mon Jan 01 17:32:01 2001 -05 | Sat Jan 05 19:00:00 1980 -05 | 7666 days 22:32:01
+    | Sat Sep 22 18:19:20 2001 -05 | Sat Jan 05 19:00:00 1980 -05 | 7930 days 23:19:20
 (16 rows)
 
 SELECT '' AS "226", d1.f1 AS timestamp1, d2.f1 AS timestamp2, d1.f1 - d2.f1 AS difference
   FROM TEMP_TIMESTAMP d1, TEMP_TIMESTAMP d2
   ORDER BY timestamp1, timestamp2, difference;
- 226 |          timestamp1          |          timestamp2          |                difference                 
------+------------------------------+------------------------------+-------------------------------------------
-     | Thu Jan 01 00:00:00 1970 PST | Thu Jan 01 00:00:00 1970 PST | @ 0
-     | Thu Jan 01 00:00:00 1970 PST | Wed Feb 28 17:32:01 1996 PST | @ 9554 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Thu Feb 29 17:32:01 1996 PST | @ 9555 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Fri Mar 01 17:32:01 1996 PST | @ 9556 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Mon Dec 30 17:32:01 1996 PST | @ 9860 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Tue Dec 31 17:32:01 1996 PST | @ 9861 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Fri Dec 31 17:32:01 1999 PST | @ 10956 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Sat Jan 01 17:32:01 2000 PST | @ 10957 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 02:14:05 2000 PST | @ 11031 days 2 hours 14 mins 5 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 03:14:04 2000 PST | @ 11031 days 3 hours 14 mins 4 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 08:14:01 2000 PST | @ 11031 days 8 hours 14 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 12:14:03 2000 PST | @ 11031 days 12 hours 14 mins 3 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Wed Mar 15 13:14:02 2000 PST | @ 11031 days 13 hours 14 mins 2 secs ago
-     | Thu Jan 01 00:00:00 1970 PST | Sun Dec 31 17:32:01 2000 PST | @ 11322 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Mon Jan 01 17:32:01 2001 PST | @ 11323 days 17 hours 32 mins 1 sec ago
-     | Thu Jan 01 00:00:00 1970 PST | Sat Sep 22 18:19:20 2001 PDT | @ 11587 days 17 hours 19 mins 20 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9554 days 17 hours 32 mins 1 sec
-     | Wed Feb 28 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 0
-     | Wed Feb 28 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 1 day ago
-     | Wed Feb 28 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 2 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 306 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 307 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1402 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1403 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1476 days 8 hours 42 mins 4 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1476 days 9 hours 42 mins 3 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1476 days 14 hours 42 mins ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1476 days 18 hours 42 mins 2 secs ago
-     | Wed Feb 28 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1476 days 19 hours 42 mins 1 sec ago
-     | Wed Feb 28 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1768 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1769 days ago
-     | Wed Feb 28 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2032 days 23 hours 47 mins 19 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9555 days 17 hours 32 mins 1 sec
-     | Thu Feb 29 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 1 day
-     | Thu Feb 29 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 0
-     | Thu Feb 29 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 1 day ago
-     | Thu Feb 29 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 305 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 306 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1401 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1402 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1475 days 8 hours 42 mins 4 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1475 days 9 hours 42 mins 3 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1475 days 14 hours 42 mins ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1475 days 18 hours 42 mins 2 secs ago
-     | Thu Feb 29 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1475 days 19 hours 42 mins 1 sec ago
-     | Thu Feb 29 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1767 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1768 days ago
-     | Thu Feb 29 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2031 days 23 hours 47 mins 19 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9556 days 17 hours 32 mins 1 sec
-     | Fri Mar 01 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 2 days
-     | Fri Mar 01 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 1 day
-     | Fri Mar 01 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 0
-     | Fri Mar 01 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 304 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 305 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1400 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1401 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1474 days 8 hours 42 mins 4 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1474 days 9 hours 42 mins 3 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1474 days 14 hours 42 mins ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1474 days 18 hours 42 mins 2 secs ago
-     | Fri Mar 01 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1474 days 19 hours 42 mins 1 sec ago
-     | Fri Mar 01 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1766 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1767 days ago
-     | Fri Mar 01 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 2030 days 23 hours 47 mins 19 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9860 days 17 hours 32 mins 1 sec
-     | Mon Dec 30 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 306 days
-     | Mon Dec 30 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 305 days
-     | Mon Dec 30 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 304 days
-     | Mon Dec 30 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 0
-     | Mon Dec 30 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 1 day ago
-     | Mon Dec 30 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1096 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1097 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1170 days 8 hours 42 mins 4 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1170 days 9 hours 42 mins 3 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1170 days 14 hours 42 mins ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1170 days 18 hours 42 mins 2 secs ago
-     | Mon Dec 30 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1170 days 19 hours 42 mins 1 sec ago
-     | Mon Dec 30 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1462 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1463 days ago
-     | Mon Dec 30 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 1726 days 23 hours 47 mins 19 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Thu Jan 01 00:00:00 1970 PST | @ 9861 days 17 hours 32 mins 1 sec
-     | Tue Dec 31 17:32:01 1996 PST | Wed Feb 28 17:32:01 1996 PST | @ 307 days
-     | Tue Dec 31 17:32:01 1996 PST | Thu Feb 29 17:32:01 1996 PST | @ 306 days
-     | Tue Dec 31 17:32:01 1996 PST | Fri Mar 01 17:32:01 1996 PST | @ 305 days
-     | Tue Dec 31 17:32:01 1996 PST | Mon Dec 30 17:32:01 1996 PST | @ 1 day
-     | Tue Dec 31 17:32:01 1996 PST | Tue Dec 31 17:32:01 1996 PST | @ 0
-     | Tue Dec 31 17:32:01 1996 PST | Fri Dec 31 17:32:01 1999 PST | @ 1095 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Sat Jan 01 17:32:01 2000 PST | @ 1096 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 02:14:05 2000 PST | @ 1169 days 8 hours 42 mins 4 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 03:14:04 2000 PST | @ 1169 days 9 hours 42 mins 3 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 08:14:01 2000 PST | @ 1169 days 14 hours 42 mins ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 12:14:03 2000 PST | @ 1169 days 18 hours 42 mins 2 secs ago
-     | Tue Dec 31 17:32:01 1996 PST | Wed Mar 15 13:14:02 2000 PST | @ 1169 days 19 hours 42 mins 1 sec ago
-     | Tue Dec 31 17:32:01 1996 PST | Sun Dec 31 17:32:01 2000 PST | @ 1461 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Mon Jan 01 17:32:01 2001 PST | @ 1462 days ago
-     | Tue Dec 31 17:32:01 1996 PST | Sat Sep 22 18:19:20 2001 PDT | @ 1725 days 23 hours 47 mins 19 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Thu Jan 01 00:00:00 1970 PST | @ 10956 days 17 hours 32 mins 1 sec
-     | Fri Dec 31 17:32:01 1999 PST | Wed Feb 28 17:32:01 1996 PST | @ 1402 days
-     | Fri Dec 31 17:32:01 1999 PST | Thu Feb 29 17:32:01 1996 PST | @ 1401 days
-     | Fri Dec 31 17:32:01 1999 PST | Fri Mar 01 17:32:01 1996 PST | @ 1400 days
-     | Fri Dec 31 17:32:01 1999 PST | Mon Dec 30 17:32:01 1996 PST | @ 1096 days
-     | Fri Dec 31 17:32:01 1999 PST | Tue Dec 31 17:32:01 1996 PST | @ 1095 days
-     | Fri Dec 31 17:32:01 1999 PST | Fri Dec 31 17:32:01 1999 PST | @ 0
-     | Fri Dec 31 17:32:01 1999 PST | Sat Jan 01 17:32:01 2000 PST | @ 1 day ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 02:14:05 2000 PST | @ 74 days 8 hours 42 mins 4 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 03:14:04 2000 PST | @ 74 days 9 hours 42 mins 3 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 08:14:01 2000 PST | @ 74 days 14 hours 42 mins ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 12:14:03 2000 PST | @ 74 days 18 hours 42 mins 2 secs ago
-     | Fri Dec 31 17:32:01 1999 PST | Wed Mar 15 13:14:02 2000 PST | @ 74 days 19 hours 42 mins 1 sec ago
-     | Fri Dec 31 17:32:01 1999 PST | Sun Dec 31 17:32:01 2000 PST | @ 366 days ago
-     | Fri Dec 31 17:32:01 1999 PST | Mon Jan 01 17:32:01 2001 PST | @ 367 days ago
-     | Fri Dec 31 17:32:01 1999 PST | Sat Sep 22 18:19:20 2001 PDT | @ 630 days 23 hours 47 mins 19 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 10957 days 17 hours 32 mins 1 sec
-     | Sat Jan 01 17:32:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1403 days
-     | Sat Jan 01 17:32:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1402 days
-     | Sat Jan 01 17:32:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1401 days
-     | Sat Jan 01 17:32:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1097 days
-     | Sat Jan 01 17:32:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1096 days
-     | Sat Jan 01 17:32:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 1 day
-     | Sat Jan 01 17:32:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 0
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 73 days 8 hours 42 mins 4 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 73 days 9 hours 42 mins 3 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 73 days 14 hours 42 mins ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 73 days 18 hours 42 mins 2 secs ago
-     | Sat Jan 01 17:32:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 73 days 19 hours 42 mins 1 sec ago
-     | Sat Jan 01 17:32:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 365 days ago
-     | Sat Jan 01 17:32:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 366 days ago
-     | Sat Jan 01 17:32:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 629 days 23 hours 47 mins 19 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 2 hours 14 mins 5 secs
-     | Wed Mar 15 02:14:05 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 8 hours 42 mins 4 secs
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 0
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 59 mins 59 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 5 hours 59 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 9 hours 59 mins 58 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 10 hours 59 mins 57 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 15 hours 17 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 15 hours 17 mins 56 secs ago
-     | Wed Mar 15 02:14:05 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 15 hours 5 mins 15 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 3 hours 14 mins 4 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 9 hours 42 mins 3 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 59 mins 59 secs
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 0
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 4 hours 59 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 8 hours 59 mins 59 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 9 hours 59 mins 58 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 14 hours 17 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 14 hours 17 mins 57 secs ago
-     | Wed Mar 15 03:14:04 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 14 hours 5 mins 16 secs ago
-     | Wed Mar 15 08:14:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 8 hours 14 mins 1 sec
-     | Wed Mar 15 08:14:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 14 hours 42 mins
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 5 hours 59 mins 56 secs
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 4 hours 59 mins 57 secs
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 0
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 4 hours 2 secs ago
-     | Wed Mar 15 08:14:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 5 hours 1 sec ago
-     | Wed Mar 15 08:14:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 9 hours 18 mins ago
-     | Wed Mar 15 08:14:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 9 hours 18 mins ago
-     | Wed Mar 15 08:14:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 9 hours 5 mins 19 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 12 hours 14 mins 3 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 18 hours 42 mins 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 9 hours 59 mins 58 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 8 hours 59 mins 59 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 4 hours 2 secs
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 0
-     | Wed Mar 15 12:14:03 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 59 mins 59 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 5 hours 17 mins 58 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 5 hours 17 mins 58 secs ago
-     | Wed Mar 15 12:14:03 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 5 hours 5 mins 17 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11031 days 13 hours 14 mins 2 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1476 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1475 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1474 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1170 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1169 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 74 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 73 days 19 hours 42 mins 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 10 hours 59 mins 57 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 9 hours 59 mins 58 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 5 hours 1 sec
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 59 mins 59 secs
-     | Wed Mar 15 13:14:02 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 0
-     | Wed Mar 15 13:14:02 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 291 days 4 hours 17 mins 59 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 292 days 4 hours 17 mins 59 secs ago
-     | Wed Mar 15 13:14:02 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 556 days 4 hours 5 mins 18 secs ago
-     | Sun Dec 31 17:32:01 2000 PST | Thu Jan 01 00:00:00 1970 PST | @ 11322 days 17 hours 32 mins 1 sec
-     | Sun Dec 31 17:32:01 2000 PST | Wed Feb 28 17:32:01 1996 PST | @ 1768 days
-     | Sun Dec 31 17:32:01 2000 PST | Thu Feb 29 17:32:01 1996 PST | @ 1767 days
-     | Sun Dec 31 17:32:01 2000 PST | Fri Mar 01 17:32:01 1996 PST | @ 1766 days
-     | Sun Dec 31 17:32:01 2000 PST | Mon Dec 30 17:32:01 1996 PST | @ 1462 days
-     | Sun Dec 31 17:32:01 2000 PST | Tue Dec 31 17:32:01 1996 PST | @ 1461 days
-     | Sun Dec 31 17:32:01 2000 PST | Fri Dec 31 17:32:01 1999 PST | @ 366 days
-     | Sun Dec 31 17:32:01 2000 PST | Sat Jan 01 17:32:01 2000 PST | @ 365 days
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 02:14:05 2000 PST | @ 291 days 15 hours 17 mins 56 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 03:14:04 2000 PST | @ 291 days 14 hours 17 mins 57 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 08:14:01 2000 PST | @ 291 days 9 hours 18 mins
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 12:14:03 2000 PST | @ 291 days 5 hours 17 mins 58 secs
-     | Sun Dec 31 17:32:01 2000 PST | Wed Mar 15 13:14:02 2000 PST | @ 291 days 4 hours 17 mins 59 secs
-     | Sun Dec 31 17:32:01 2000 PST | Sun Dec 31 17:32:01 2000 PST | @ 0
-     | Sun Dec 31 17:32:01 2000 PST | Mon Jan 01 17:32:01 2001 PST | @ 1 day ago
-     | Sun Dec 31 17:32:01 2000 PST | Sat Sep 22 18:19:20 2001 PDT | @ 264 days 23 hours 47 mins 19 secs ago
-     | Mon Jan 01 17:32:01 2001 PST | Thu Jan 01 00:00:00 1970 PST | @ 11323 days 17 hours 32 mins 1 sec
-     | Mon Jan 01 17:32:01 2001 PST | Wed Feb 28 17:32:01 1996 PST | @ 1769 days
-     | Mon Jan 01 17:32:01 2001 PST | Thu Feb 29 17:32:01 1996 PST | @ 1768 days
-     | Mon Jan 01 17:32:01 2001 PST | Fri Mar 01 17:32:01 1996 PST | @ 1767 days
-     | Mon Jan 01 17:32:01 2001 PST | Mon Dec 30 17:32:01 1996 PST | @ 1463 days
-     | Mon Jan 01 17:32:01 2001 PST | Tue Dec 31 17:32:01 1996 PST | @ 1462 days
-     | Mon Jan 01 17:32:01 2001 PST | Fri Dec 31 17:32:01 1999 PST | @ 367 days
-     | Mon Jan 01 17:32:01 2001 PST | Sat Jan 01 17:32:01 2000 PST | @ 366 days
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 02:14:05 2000 PST | @ 292 days 15 hours 17 mins 56 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 03:14:04 2000 PST | @ 292 days 14 hours 17 mins 57 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 08:14:01 2000 PST | @ 292 days 9 hours 18 mins
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 12:14:03 2000 PST | @ 292 days 5 hours 17 mins 58 secs
-     | Mon Jan 01 17:32:01 2001 PST | Wed Mar 15 13:14:02 2000 PST | @ 292 days 4 hours 17 mins 59 secs
-     | Mon Jan 01 17:32:01 2001 PST | Sun Dec 31 17:32:01 2000 PST | @ 1 day
-     | Mon Jan 01 17:32:01 2001 PST | Mon Jan 01 17:32:01 2001 PST | @ 0
-     | Mon Jan 01 17:32:01 2001 PST | Sat Sep 22 18:19:20 2001 PDT | @ 263 days 23 hours 47 mins 19 secs ago
-     | Sat Sep 22 18:19:20 2001 PDT | Thu Jan 01 00:00:00 1970 PST | @ 11587 days 17 hours 19 mins 20 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Feb 28 17:32:01 1996 PST | @ 2032 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Thu Feb 29 17:32:01 1996 PST | @ 2031 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Fri Mar 01 17:32:01 1996 PST | @ 2030 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Mon Dec 30 17:32:01 1996 PST | @ 1726 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Tue Dec 31 17:32:01 1996 PST | @ 1725 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Fri Dec 31 17:32:01 1999 PST | @ 630 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sat Jan 01 17:32:01 2000 PST | @ 629 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 02:14:05 2000 PST | @ 556 days 15 hours 5 mins 15 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 03:14:04 2000 PST | @ 556 days 14 hours 5 mins 16 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 08:14:01 2000 PST | @ 556 days 9 hours 5 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 12:14:03 2000 PST | @ 556 days 5 hours 5 mins 17 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Wed Mar 15 13:14:02 2000 PST | @ 556 days 4 hours 5 mins 18 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sun Dec 31 17:32:01 2000 PST | @ 264 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Mon Jan 01 17:32:01 2001 PST | @ 263 days 23 hours 47 mins 19 secs
-     | Sat Sep 22 18:19:20 2001 PDT | Sat Sep 22 18:19:20 2001 PDT | @ 0
+ 226 |          timestamp1          |          timestamp2          |      difference       
+-----+------------------------------+------------------------------+-----------------------
+     | Thu Jan 01 00:00:00 1970 -05 | Thu Jan 01 00:00:00 1970 -05 | 00:00:00
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Feb 28 17:32:01 1996 -05 | -9554 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Thu Feb 29 17:32:01 1996 -05 | -9555 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Fri Mar 01 17:32:01 1996 -05 | -9556 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Mon Dec 30 17:32:01 1996 -05 | -9860 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Tue Dec 31 17:32:01 1996 -05 | -9861 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Fri Dec 31 17:32:01 1999 -05 | -10956 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Sat Jan 01 17:32:01 2000 -05 | -10957 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 02:14:05 2000 -05 | -11031 days -02:14:05
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 03:14:04 2000 -05 | -11031 days -03:14:04
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 08:14:01 2000 -05 | -11031 days -08:14:01
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 12:14:03 2000 -05 | -11031 days -12:14:03
+     | Thu Jan 01 00:00:00 1970 -05 | Wed Mar 15 13:14:02 2000 -05 | -11031 days -13:14:02
+     | Thu Jan 01 00:00:00 1970 -05 | Sun Dec 31 17:32:01 2000 -05 | -11322 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Mon Jan 01 17:32:01 2001 -05 | -11323 days -17:32:01
+     | Thu Jan 01 00:00:00 1970 -05 | Sat Sep 22 18:19:20 2001 -05 | -11587 days -18:19:20
+     | Wed Feb 28 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9554 days 17:32:01
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 00:00:00
+     | Wed Feb 28 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | -1 days
+     | Wed Feb 28 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | -2 days
+     | Wed Feb 28 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -306 days
+     | Wed Feb 28 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -307 days
+     | Wed Feb 28 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1402 days
+     | Wed Feb 28 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1403 days
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1476 days -08:42:04
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1476 days -09:42:03
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1476 days -14:42:00
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1476 days -18:42:02
+     | Wed Feb 28 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1476 days -19:42:01
+     | Wed Feb 28 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1768 days
+     | Wed Feb 28 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1769 days
+     | Wed Feb 28 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2033 days -00:47:19
+     | Thu Feb 29 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9555 days 17:32:01
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 1 day
+     | Thu Feb 29 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 00:00:00
+     | Thu Feb 29 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | -1 days
+     | Thu Feb 29 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -305 days
+     | Thu Feb 29 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -306 days
+     | Thu Feb 29 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1401 days
+     | Thu Feb 29 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1402 days
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1475 days -08:42:04
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1475 days -09:42:03
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1475 days -14:42:00
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1475 days -18:42:02
+     | Thu Feb 29 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1475 days -19:42:01
+     | Thu Feb 29 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1767 days
+     | Thu Feb 29 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1768 days
+     | Thu Feb 29 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2032 days -00:47:19
+     | Fri Mar 01 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9556 days 17:32:01
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 2 days
+     | Fri Mar 01 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 1 day
+     | Fri Mar 01 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 00:00:00
+     | Fri Mar 01 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | -304 days
+     | Fri Mar 01 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -305 days
+     | Fri Mar 01 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1400 days
+     | Fri Mar 01 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1401 days
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1474 days -08:42:04
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1474 days -09:42:03
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1474 days -14:42:00
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1474 days -18:42:02
+     | Fri Mar 01 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1474 days -19:42:01
+     | Fri Mar 01 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1766 days
+     | Fri Mar 01 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1767 days
+     | Fri Mar 01 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -2031 days -00:47:19
+     | Mon Dec 30 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9860 days 17:32:01
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 306 days
+     | Mon Dec 30 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 305 days
+     | Mon Dec 30 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 304 days
+     | Mon Dec 30 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | 00:00:00
+     | Mon Dec 30 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | -1 days
+     | Mon Dec 30 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1096 days
+     | Mon Dec 30 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1097 days
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1170 days -08:42:04
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1170 days -09:42:03
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1170 days -14:42:00
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1170 days -18:42:02
+     | Mon Dec 30 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1170 days -19:42:01
+     | Mon Dec 30 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1462 days
+     | Mon Dec 30 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1463 days
+     | Mon Dec 30 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -1727 days -00:47:19
+     | Tue Dec 31 17:32:01 1996 -05 | Thu Jan 01 00:00:00 1970 -05 | 9861 days 17:32:01
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Feb 28 17:32:01 1996 -05 | 307 days
+     | Tue Dec 31 17:32:01 1996 -05 | Thu Feb 29 17:32:01 1996 -05 | 306 days
+     | Tue Dec 31 17:32:01 1996 -05 | Fri Mar 01 17:32:01 1996 -05 | 305 days
+     | Tue Dec 31 17:32:01 1996 -05 | Mon Dec 30 17:32:01 1996 -05 | 1 day
+     | Tue Dec 31 17:32:01 1996 -05 | Tue Dec 31 17:32:01 1996 -05 | 00:00:00
+     | Tue Dec 31 17:32:01 1996 -05 | Fri Dec 31 17:32:01 1999 -05 | -1095 days
+     | Tue Dec 31 17:32:01 1996 -05 | Sat Jan 01 17:32:01 2000 -05 | -1096 days
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 02:14:05 2000 -05 | -1169 days -08:42:04
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 03:14:04 2000 -05 | -1169 days -09:42:03
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 08:14:01 2000 -05 | -1169 days -14:42:00
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 12:14:03 2000 -05 | -1169 days -18:42:02
+     | Tue Dec 31 17:32:01 1996 -05 | Wed Mar 15 13:14:02 2000 -05 | -1169 days -19:42:01
+     | Tue Dec 31 17:32:01 1996 -05 | Sun Dec 31 17:32:01 2000 -05 | -1461 days
+     | Tue Dec 31 17:32:01 1996 -05 | Mon Jan 01 17:32:01 2001 -05 | -1462 days
+     | Tue Dec 31 17:32:01 1996 -05 | Sat Sep 22 18:19:20 2001 -05 | -1726 days -00:47:19
+     | Fri Dec 31 17:32:01 1999 -05 | Thu Jan 01 00:00:00 1970 -05 | 10956 days 17:32:01
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Feb 28 17:32:01 1996 -05 | 1402 days
+     | Fri Dec 31 17:32:01 1999 -05 | Thu Feb 29 17:32:01 1996 -05 | 1401 days
+     | Fri Dec 31 17:32:01 1999 -05 | Fri Mar 01 17:32:01 1996 -05 | 1400 days
+     | Fri Dec 31 17:32:01 1999 -05 | Mon Dec 30 17:32:01 1996 -05 | 1096 days
+     | Fri Dec 31 17:32:01 1999 -05 | Tue Dec 31 17:32:01 1996 -05 | 1095 days
+     | Fri Dec 31 17:32:01 1999 -05 | Fri Dec 31 17:32:01 1999 -05 | 00:00:00
+     | Fri Dec 31 17:32:01 1999 -05 | Sat Jan 01 17:32:01 2000 -05 | -1 days
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 02:14:05 2000 -05 | -74 days -08:42:04
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 03:14:04 2000 -05 | -74 days -09:42:03
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 08:14:01 2000 -05 | -74 days -14:42:00
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 12:14:03 2000 -05 | -74 days -18:42:02
+     | Fri Dec 31 17:32:01 1999 -05 | Wed Mar 15 13:14:02 2000 -05 | -74 days -19:42:01
+     | Fri Dec 31 17:32:01 1999 -05 | Sun Dec 31 17:32:01 2000 -05 | -366 days
+     | Fri Dec 31 17:32:01 1999 -05 | Mon Jan 01 17:32:01 2001 -05 | -367 days
+     | Fri Dec 31 17:32:01 1999 -05 | Sat Sep 22 18:19:20 2001 -05 | -631 days -00:47:19
+     | Sat Jan 01 17:32:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 10957 days 17:32:01
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1403 days
+     | Sat Jan 01 17:32:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1402 days
+     | Sat Jan 01 17:32:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1401 days
+     | Sat Jan 01 17:32:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1097 days
+     | Sat Jan 01 17:32:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1096 days
+     | Sat Jan 01 17:32:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 1 day
+     | Sat Jan 01 17:32:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 00:00:00
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | -73 days -08:42:04
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | -73 days -09:42:03
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -73 days -14:42:00
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -73 days -18:42:02
+     | Sat Jan 01 17:32:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -73 days -19:42:01
+     | Sat Jan 01 17:32:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -365 days
+     | Sat Jan 01 17:32:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -366 days
+     | Sat Jan 01 17:32:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -630 days -00:47:19
+     | Wed Mar 15 02:14:05 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 02:14:05
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 08:42:04
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 00:00:00
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | -00:59:59
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -05:59:56
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -09:59:58
+     | Wed Mar 15 02:14:05 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -10:59:57
+     | Wed Mar 15 02:14:05 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -15:17:56
+     | Wed Mar 15 02:14:05 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -15:17:56
+     | Wed Mar 15 02:14:05 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -16:05:15
+     | Wed Mar 15 03:14:04 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 03:14:04
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 09:42:03
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 00:59:59
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 00:00:00
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | -04:59:57
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -08:59:59
+     | Wed Mar 15 03:14:04 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -09:59:58
+     | Wed Mar 15 03:14:04 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -14:17:57
+     | Wed Mar 15 03:14:04 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -14:17:57
+     | Wed Mar 15 03:14:04 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -15:05:16
+     | Wed Mar 15 08:14:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 08:14:01
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 14:42:00
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 05:59:56
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 04:59:57
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 00:00:00
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | -04:00:02
+     | Wed Mar 15 08:14:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -05:00:01
+     | Wed Mar 15 08:14:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -09:18:00
+     | Wed Mar 15 08:14:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -09:18:00
+     | Wed Mar 15 08:14:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -10:05:19
+     | Wed Mar 15 12:14:03 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 12:14:03
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 18:42:02
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 09:59:58
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 08:59:59
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 04:00:02
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 00:00:00
+     | Wed Mar 15 12:14:03 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | -00:59:59
+     | Wed Mar 15 12:14:03 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -05:17:58
+     | Wed Mar 15 12:14:03 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -05:17:58
+     | Wed Mar 15 12:14:03 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -06:05:17
+     | Wed Mar 15 13:14:02 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11031 days 13:14:02
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1476 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1475 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1474 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1170 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1169 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 74 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 73 days 19:42:01
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 10:59:57
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 09:59:58
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 05:00:01
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 00:59:59
+     | Wed Mar 15 13:14:02 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | 00:00:00
+     | Wed Mar 15 13:14:02 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | -291 days -04:17:59
+     | Wed Mar 15 13:14:02 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -292 days -04:17:59
+     | Wed Mar 15 13:14:02 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -556 days -05:05:18
+     | Sun Dec 31 17:32:01 2000 -05 | Thu Jan 01 00:00:00 1970 -05 | 11322 days 17:32:01
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Feb 28 17:32:01 1996 -05 | 1768 days
+     | Sun Dec 31 17:32:01 2000 -05 | Thu Feb 29 17:32:01 1996 -05 | 1767 days
+     | Sun Dec 31 17:32:01 2000 -05 | Fri Mar 01 17:32:01 1996 -05 | 1766 days
+     | Sun Dec 31 17:32:01 2000 -05 | Mon Dec 30 17:32:01 1996 -05 | 1462 days
+     | Sun Dec 31 17:32:01 2000 -05 | Tue Dec 31 17:32:01 1996 -05 | 1461 days
+     | Sun Dec 31 17:32:01 2000 -05 | Fri Dec 31 17:32:01 1999 -05 | 366 days
+     | Sun Dec 31 17:32:01 2000 -05 | Sat Jan 01 17:32:01 2000 -05 | 365 days
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 02:14:05 2000 -05 | 291 days 15:17:56
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 03:14:04 2000 -05 | 291 days 14:17:57
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 08:14:01 2000 -05 | 291 days 09:18:00
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 12:14:03 2000 -05 | 291 days 05:17:58
+     | Sun Dec 31 17:32:01 2000 -05 | Wed Mar 15 13:14:02 2000 -05 | 291 days 04:17:59
+     | Sun Dec 31 17:32:01 2000 -05 | Sun Dec 31 17:32:01 2000 -05 | 00:00:00
+     | Sun Dec 31 17:32:01 2000 -05 | Mon Jan 01 17:32:01 2001 -05 | -1 days
+     | Sun Dec 31 17:32:01 2000 -05 | Sat Sep 22 18:19:20 2001 -05 | -265 days -00:47:19
+     | Mon Jan 01 17:32:01 2001 -05 | Thu Jan 01 00:00:00 1970 -05 | 11323 days 17:32:01
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Feb 28 17:32:01 1996 -05 | 1769 days
+     | Mon Jan 01 17:32:01 2001 -05 | Thu Feb 29 17:32:01 1996 -05 | 1768 days
+     | Mon Jan 01 17:32:01 2001 -05 | Fri Mar 01 17:32:01 1996 -05 | 1767 days
+     | Mon Jan 01 17:32:01 2001 -05 | Mon Dec 30 17:32:01 1996 -05 | 1463 days
+     | Mon Jan 01 17:32:01 2001 -05 | Tue Dec 31 17:32:01 1996 -05 | 1462 days
+     | Mon Jan 01 17:32:01 2001 -05 | Fri Dec 31 17:32:01 1999 -05 | 367 days
+     | Mon Jan 01 17:32:01 2001 -05 | Sat Jan 01 17:32:01 2000 -05 | 366 days
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 02:14:05 2000 -05 | 292 days 15:17:56
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 03:14:04 2000 -05 | 292 days 14:17:57
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 08:14:01 2000 -05 | 292 days 09:18:00
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 12:14:03 2000 -05 | 292 days 05:17:58
+     | Mon Jan 01 17:32:01 2001 -05 | Wed Mar 15 13:14:02 2000 -05 | 292 days 04:17:59
+     | Mon Jan 01 17:32:01 2001 -05 | Sun Dec 31 17:32:01 2000 -05 | 1 day
+     | Mon Jan 01 17:32:01 2001 -05 | Mon Jan 01 17:32:01 2001 -05 | 00:00:00
+     | Mon Jan 01 17:32:01 2001 -05 | Sat Sep 22 18:19:20 2001 -05 | -264 days -00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Thu Jan 01 00:00:00 1970 -05 | 11587 days 18:19:20
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Feb 28 17:32:01 1996 -05 | 2033 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Thu Feb 29 17:32:01 1996 -05 | 2032 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Fri Mar 01 17:32:01 1996 -05 | 2031 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Mon Dec 30 17:32:01 1996 -05 | 1727 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Tue Dec 31 17:32:01 1996 -05 | 1726 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Fri Dec 31 17:32:01 1999 -05 | 631 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Sat Jan 01 17:32:01 2000 -05 | 630 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 02:14:05 2000 -05 | 556 days 16:05:15
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 03:14:04 2000 -05 | 556 days 15:05:16
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 08:14:01 2000 -05 | 556 days 10:05:19
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 12:14:03 2000 -05 | 556 days 06:05:17
+     | Sat Sep 22 18:19:20 2001 -05 | Wed Mar 15 13:14:02 2000 -05 | 556 days 05:05:18
+     | Sat Sep 22 18:19:20 2001 -05 | Sun Dec 31 17:32:01 2000 -05 | 265 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Mon Jan 01 17:32:01 2001 -05 | 264 days 00:47:19
+     | Sat Sep 22 18:19:20 2001 -05 | Sat Sep 22 18:19:20 2001 -05 | 00:00:00
 (256 rows)
 
 --
@@ -2055,22 +2053,22 @@
   ORDER BY date, "timestamp";
  16 |          timestamp           |    date    
 ----+------------------------------+------------
-    | Thu Jan 01 00:00:00 1970 PST | 01-01-1970
-    | Wed Feb 28 17:32:01 1996 PST | 02-28-1996
-    | Thu Feb 29 17:32:01 1996 PST | 02-29-1996
-    | Fri Mar 01 17:32:01 1996 PST | 03-01-1996
-    | Mon Dec 30 17:32:01 1996 PST | 12-30-1996
-    | Tue Dec 31 17:32:01 1996 PST | 12-31-1996
-    | Fri Dec 31 17:32:01 1999 PST | 12-31-1999
-    | Sat Jan 01 17:32:01 2000 PST | 01-01-2000
-    | Wed Mar 15 02:14:05 2000 PST | 03-15-2000
-    | Wed Mar 15 03:14:04 2000 PST | 03-15-2000
-    | Wed Mar 15 08:14:01 2000 PST | 03-15-2000
-    | Wed Mar 15 12:14:03 2000 PST | 03-15-2000
-    | Wed Mar 15 13:14:02 2000 PST | 03-15-2000
-    | Sun Dec 31 17:32:01 2000 PST | 12-31-2000
-    | Mon Jan 01 17:32:01 2001 PST | 01-01-2001
-    | Sat Sep 22 18:19:20 2001 PDT | 09-22-2001
+    | Thu Jan 01 00:00:00 1970 -05 | 01-01-1970
+    | Wed Feb 28 17:32:01 1996 -05 | 02-28-1996
+    | Thu Feb 29 17:32:01 1996 -05 | 02-29-1996
+    | Fri Mar 01 17:32:01 1996 -05 | 03-01-1996
+    | Mon Dec 30 17:32:01 1996 -05 | 12-30-1996
+    | Tue Dec 31 17:32:01 1996 -05 | 12-31-1996
+    | Fri Dec 31 17:32:01 1999 -05 | 12-31-1999
+    | Sat Jan 01 17:32:01 2000 -05 | 01-01-2000
+    | Wed Mar 15 02:14:05 2000 -05 | 03-15-2000
+    | Wed Mar 15 03:14:04 2000 -05 | 03-15-2000
+    | Wed Mar 15 08:14:01 2000 -05 | 03-15-2000
+    | Wed Mar 15 12:14:03 2000 -05 | 03-15-2000
+    | Wed Mar 15 13:14:02 2000 -05 | 03-15-2000
+    | Sun Dec 31 17:32:01 2000 -05 | 12-31-2000
+    | Mon Jan 01 17:32:01 2001 -05 | 01-01-2001
+    | Sat Sep 22 18:19:20 2001 -05 | 09-22-2001
 (16 rows)
 
 DROP TABLE TEMP_TIMESTAMP;
@@ -2115,7 +2113,7 @@
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
-    | Mon Feb 10 17:32:01 1997
+    | Thu Oct 02 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
     | Mon Feb 10 17:32:01 1997
@@ -2186,7 +2184,7 @@
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
-    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
@@ -2263,7 +2261,7 @@
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
-    | 02/10/1997 17:32:01
+    | 10/02/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
     | 02/10/1997 17:32:01
@@ -2347,7 +2345,7 @@
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
-    | Mon 10 Feb 17:32:01 1997
+    | Thu 02 Oct 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
     | Mon 10 Feb 17:32:01 1997
@@ -2425,7 +2423,7 @@
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
-    | 1997-02-10 17:32:01
+    | 1997-10-02 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
     | 1997-02-10 17:32:01
@@ -2503,7 +2501,7 @@
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
-    | 10/02/1997 17:32:01
+    | 02/10/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
     | 10/02/1997 17:32:01
@@ -2550,384 +2548,384 @@
 SELECT to_timestamp('0097/Feb/16 --> 08:14:30', 'YYYY/Mon/DD --> HH:MI:SS');
          to_timestamp         
 ------------------------------
- Sat Feb 16 08:14:30 0097 PST
+ 0097-02-16 08:14:30-05:19:20
 (1 row)
 
 SELECT to_timestamp('97/2/16 8:14:30', 'FMYYYY/FMMM/FMDD FMHH:FMMI:FMSS');
          to_timestamp         
 ------------------------------
- Sat Feb 16 08:14:30 0097 PST
+ 0097-02-16 08:14:30-05:19:20
 (1 row)
 
 SELECT to_timestamp('2011$03!18 23_38_15', 'YYYY-MM-DD HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Fri Mar 18 23:38:15 2011 PDT
+      to_timestamp      
+------------------------
+ 2011-03-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('1985 January 12', 'YYYY FMMonth DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1985 FMMonth 12', 'YYYY "FMMonth" DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1985 \ 12', 'YYYY \\ DD');
-         to_timestamp         
-------------------------------
- Sat Jan 12 00:00:00 1985 PST
+      to_timestamp      
+------------------------
+ 1985-01-12 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('My birthday-> Year: 1976, Month: May, Day: 16',
                     '"My birthday-> Year:" YYYY, "Month:" FMMonth, "Day:" DD');
-         to_timestamp         
-------------------------------
- Sun May 16 00:00:00 1976 PDT
+      to_timestamp      
+------------------------
+ 1976-05-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1,582nd VIII 21', 'Y,YYYth FMRM DD');
          to_timestamp         
 ------------------------------
- Sat Aug 21 00:00:00 1582 PST
+ 1582-08-21 00:00:00-05:19:20
 (1 row)
 
 SELECT to_timestamp('15 "text between quote marks" 98 54 45',
                     E'HH24 "\\"text between quote marks\\"" YY MI SS');
-         to_timestamp         
-------------------------------
- Thu Jan 01 15:54:45 1998 PST
+      to_timestamp      
+------------------------
+ 1998-01-01 15:54:45-05
 (1 row)
 
 SELECT to_timestamp('05121445482000', 'MMDDHH24MISSYYYY');
-         to_timestamp         
-------------------------------
- Fri May 12 14:45:48 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-05-12 14:45:48-05
 (1 row)
 
 SELECT to_timestamp('2000January09Sunday', 'YYYYFMMonthDDFMDay');
-         to_timestamp         
-------------------------------
- Sun Jan 09 00:00:00 2000 PST
+      to_timestamp      
+------------------------
+ 2000-01-09 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'YYMonDD');
 ERROR:  invalid value "/Fe" for "Mon"
 DETAIL:  The given value did not match any of the allowed values for this field.
 SELECT to_timestamp('97/Feb/16', 'YY:Mon:DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'FXYY:Mon:DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('97/Feb/16', 'FXYY/Mon/DD');
-         to_timestamp         
-------------------------------
- Sun Feb 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-02-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('19971116', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Sun Nov 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('20000-1116', 'YYYY-MMDD');
-         to_timestamp          
--------------------------------
- Thu Nov 16 00:00:00 20000 PST
+      to_timestamp       
+-------------------------
+ 20000-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1997 AD 11 16', 'YYYY BC MM DD');
-         to_timestamp         
-------------------------------
- Sun Nov 16 00:00:00 1997 PST
+      to_timestamp      
+------------------------
+ 1997-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('1997 BC 11 16', 'YYYY BC MM DD');
           to_timestamp           
 ---------------------------------
- Tue Nov 16 00:00:00 1997 PST BC
+ 1997-11-16 00:00:00-05:19:20 BC
 (1 row)
 
 SELECT to_timestamp('9-1116', 'Y-MMDD');
-         to_timestamp         
-------------------------------
- Mon Nov 16 00:00:00 2009 PST
+      to_timestamp      
+------------------------
+ 2009-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('95-1116', 'YY-MMDD');
-         to_timestamp         
-------------------------------
- Thu Nov 16 00:00:00 1995 PST
+      to_timestamp      
+------------------------
+ 1995-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('995-1116', 'YYY-MMDD');
-         to_timestamp         
-------------------------------
- Thu Nov 16 00:00:00 1995 PST
+      to_timestamp      
+------------------------
+ 1995-11-16 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005426', 'YYYYWWD');
-         to_timestamp         
-------------------------------
- Sat Oct 15 00:00:00 2005 PDT
+      to_timestamp      
+------------------------
+ 2005-10-15 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005300', 'YYYYDDD');
-         to_timestamp         
-------------------------------
- Thu Oct 27 00:00:00 2005 PDT
+      to_timestamp      
+------------------------
+ 2005-10-27 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005527', 'IYYYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('005527', 'IYYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('05527', 'IYIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('5527', 'IIWID');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005364', 'IYYYIDDD');
-         to_timestamp         
-------------------------------
- Sun Jan 01 00:00:00 2006 PST
+      to_timestamp      
+------------------------
+ 2006-01-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('20050302', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2005 03 02', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp(' 2005 03 02', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('  20050302', 'YYYYMMDD');
-         to_timestamp         
-------------------------------
- Wed Mar 02 00:00:00 2005 PST
+      to_timestamp      
+------------------------
+ 2005-03-02 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 AM', 'YYYY-MM-DD HH12:MI PM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 11:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 PM', 'YYYY-MM-DD HH12:MI PM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 +05',    'YYYY-MM-DD HH12:MI TZH');
-         to_timestamp         
-------------------------------
- Sat Dec 17 22:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 01:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 -05',    'YYYY-MM-DD HH12:MI TZH');
-         to_timestamp         
-------------------------------
- Sun Dec 18 08:38:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:38:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 +05:20', 'YYYY-MM-DD HH12:MI TZH:TZM');
-         to_timestamp         
-------------------------------
- Sat Dec 17 22:18:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 01:18:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 -05:20', 'YYYY-MM-DD HH12:MI TZH:TZM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 08:58:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 11:58:00-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18 11:38 20',     'YYYY-MM-DD HH12:MI TZM');
-         to_timestamp         
-------------------------------
- Sun Dec 18 03:18:00 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 06:18:00-05
 (1 row)
 
 --
 -- Check handling of multiple spaces in format and/or input
 --
 SELECT to_timestamp('2011-12-18 23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18   23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD  HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2011-12-18  23:38:15', 'YYYY-MM-DD   HH24:MI:SS');
-         to_timestamp         
-------------------------------
- Sun Dec 18 23:38:15 2011 PST
+      to_timestamp      
+------------------------
+ 2011-12-18 23:38:15-05
 (1 row)
 
 SELECT to_timestamp('2000+   JUN', 'YYYY/MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('  2000 +JUN', 'YYYY/MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp(' 2000 +JUN', 'YYYY//MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000  +JUN', 'YYYY//MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 + JUN', 'YYYY MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 ++ JUN', 'YYYY  MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 + + JUN', 'YYYY  MON');
 ERROR:  invalid value "+ J" for "MON"
 DETAIL:  The given value did not match any of the allowed values for this field.
 SELECT to_timestamp('2000 + + JUN', 'YYYY   MON');
-         to_timestamp         
-------------------------------
- Thu Jun 01 00:00:00 2000 PDT
+      to_timestamp      
+------------------------
+ 2000-06-01 00:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 -10', 'YYYY TZH');
-         to_timestamp         
-------------------------------
- Sat Jan 01 02:00:00 2000 PST
+      to_timestamp      
+------------------------
+ 2000-01-01 05:00:00-05
 (1 row)
 
 SELECT to_timestamp('2000 -10', 'YYYY  TZH');
-         to_timestamp         
-------------------------------
- Fri Dec 31 06:00:00 1999 PST
+      to_timestamp      
+------------------------
+ 1999-12-31 09:00:00-05
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM  DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12  18', 'YYYY MM   DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011  12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011   12 18', 'YYYY  MM DD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 12 18', 'YYYYxMMxDD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011x 12x 18', 'YYYYxMMxDD');
   to_date   
 ------------
- 12-18-2011
+ 2011-12-18
 (1 row)
 
 SELECT to_date('2011 x12 x18', 'YYYYxMMxDD');
@@ -2970,9 +2968,9 @@
 SELECT to_timestamp('2016-06-13 15:50:60', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2016-06-13 15:50:60"
 SELECT to_timestamp('2016-06-13 15:50:55', 'YYYY-MM-DD HH24:MI:SS');  -- ok
-         to_timestamp         
-------------------------------
- Mon Jun 13 15:50:55 2016 PDT
+      to_timestamp      
+------------------------
+ 2016-06-13 15:50:55-05
 (1 row)
 
 SELECT to_timestamp('2016-06-13 15:50:55', 'YYYY-MM-DD HH:MI:SS');
@@ -2983,17 +2981,17 @@
 SELECT to_timestamp('2016-02-30 15:50:55', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2016-02-30 15:50:55"
 SELECT to_timestamp('2016-02-29 15:50:55', 'YYYY-MM-DD HH24:MI:SS');  -- ok
-         to_timestamp         
-------------------------------
- Mon Feb 29 15:50:55 2016 PST
+      to_timestamp      
+------------------------
+ 2016-02-29 15:50:55-05
 (1 row)
 
 SELECT to_timestamp('2015-02-29 15:50:55', 'YYYY-MM-DD HH24:MI:SS');
 ERROR:  date/time field value out of range: "2015-02-29 15:50:55"
 SELECT to_timestamp('2015-02-11 86000', 'YYYY-MM-DD SSSS');  -- ok
-         to_timestamp         
-------------------------------
- Wed Feb 11 23:53:20 2015 PST
+      to_timestamp      
+------------------------
+ 2015-02-11 23:53:20-05
 (1 row)
 
 SELECT to_timestamp('2015-02-11 86400', 'YYYY-MM-DD SSSS');
@@ -3005,7 +3003,7 @@
 SELECT to_date('2016-02-29', 'YYYY-MM-DD');  -- ok
   to_date   
 ------------
- 02-29-2016
+ 2016-02-29
 (1 row)
 
 SELECT to_date('2015-02-29', 'YYYY-MM-DD');
@@ -3013,7 +3011,7 @@
 SELECT to_date('2015 365', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-31-2015
+ 2015-12-31
 (1 row)
 
 SELECT to_date('2015 366', 'YYYY DDD');
@@ -3021,13 +3019,13 @@
 SELECT to_date('2016 365', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-30-2016
+ 2016-12-30
 (1 row)
 
 SELECT to_date('2016 366', 'YYYY DDD');  -- ok
   to_date   
 ------------
- 12-31-2016
+ 2016-12-31
 (1 row)
 
 SELECT to_date('2016 367', 'YYYY DDD');
@@ -3044,15 +3042,15 @@
 (1 row)
 
 SELECT '2012-12-12 12:00'::timestamptz;
-           timestamptz           
----------------------------------
- Wed Dec 12 12:00:00 2012 -01:30
+        timestamptz        
+---------------------------
+ 2012-12-12 12:00:00-01:30
 (1 row)
 
 SELECT '2012-12-12 12:00 America/New_York'::timestamptz;
-           timestamptz           
----------------------------------
- Wed Dec 12 15:30:00 2012 -01:30
+        timestamptz        
+---------------------------
+ 2012-12-12 15:30:00-01:30
 (1 row)
 
 SELECT to_char('2012-12-12 12:00'::timestamptz, 'YYYY-MM-DD HH:MI:SS TZ');
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/expressions.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/expressions.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/expressions.out	2019-08-12 14:55:05.422229943 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/expressions.out	2019-09-05 16:23:01.787950713 -0500
@@ -97,7 +97,7 @@
 -----------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: ((f1 >= '01-01-1997'::date) AND (f1 <= '01-01-1998'::date))
+         Filter: ((f1 >= '1997-01-01'::date) AND (f1 <= '1998-01-01'::date))
 (3 rows)
 
 select count(*) from date_tbl
@@ -114,7 +114,7 @@
 --------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: ((f1 < '01-01-1997'::date) OR (f1 > '01-01-1998'::date))
+         Filter: ((f1 < '1997-01-01'::date) OR (f1 > '1998-01-01'::date))
 (3 rows)
 
 select count(*) from date_tbl
@@ -131,7 +131,7 @@
 ----------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: (((f1 >= '01-01-1997'::date) AND (f1 <= '01-01-1998'::date)) OR ((f1 >= '01-01-1998'::date) AND (f1 <= '01-01-1997'::date)))
+         Filter: (((f1 >= '1997-01-01'::date) AND (f1 <= '1998-01-01'::date)) OR ((f1 >= '1998-01-01'::date) AND (f1 <= '1997-01-01'::date)))
 (3 rows)
 
 select count(*) from date_tbl
@@ -148,7 +148,7 @@
 -----------------------------------------------------------------------------------------------------------------------------------------
  Aggregate
    ->  Seq Scan on date_tbl
-         Filter: (((f1 < '01-01-1997'::date) OR (f1 > '01-01-1998'::date)) AND ((f1 < '01-01-1998'::date) OR (f1 > '01-01-1997'::date)))
+         Filter: (((f1 < '1997-01-01'::date) OR (f1 > '1998-01-01'::date)) AND ((f1 < '1998-01-01'::date) OR (f1 > '1997-01-01'::date)))
 (3 rows)
 
 select count(*) from date_tbl
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/arrays.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/arrays.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/arrays.out	2019-07-12 13:20:36.181293455 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/arrays.out	2019-09-05 16:23:29.294298723 -0500
@@ -1450,9 +1450,9 @@
 (1 row)
 
 select '{0 second  ,0 second}'::interval[];
-   interval    
----------------
- {"@ 0","@ 0"}
+      interval       
+---------------------
+ {00:00:00,00:00:00}
 (1 row)
 
 select '{ { "," } , { 3 } }'::text[];
@@ -1471,9 +1471,9 @@
            0 second,
            @ 1 hour @ 42 minutes @ 20 seconds
          }'::interval[];
-              interval              
-------------------------------------
- {"@ 0","@ 1 hour 42 mins 20 secs"}
+      interval       
+---------------------
+ {00:00:00,01:42:20}
 (1 row)
 
 select array[]::text[];
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/generated.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/generated.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/generated.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/generated.out	2019-09-05 16:23:43.847540665 -0500
@@ -556,13 +556,13 @@
 SELECT * FROM gtest_parent;
      f1     | f2 | f3 
 ------------+----+----
- 07-15-2016 |  1 |  2
+ 2016-07-15 |  1 |  2
 (1 row)
 
 SELECT * FROM gtest_child;
      f1     | f2 | f3 
 ------------+----+----
- 07-15-2016 |  1 |  2
+ 2016-07-15 |  1 |  2
 (1 row)
 
 DROP TABLE gtest_parent;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rules.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rules.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rules.out	2019-08-12 14:55:05.454232660 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rules.out	2019-09-05 16:23:59.096841753 -0500
@@ -1038,9 +1038,9 @@
                                     );
 UPDATE shoelace_data SET sl_avail = 6 WHERE  sl_name = 'sl7';
 SELECT * FROM shoelace_log;
-  sl_name   | sl_avail | log_who  |         log_when         
-------------+----------+----------+--------------------------
- sl7        |        6 | Al Bundy | Thu Jan 01 00:00:00 1970
+  sl_name   | sl_avail | log_who  |      log_when       
+------------+----------+----------+---------------------
+ sl7        |        6 | Al Bundy | 1970-01-01 00:00:00
 (1 row)
 
     CREATE RULE shoelace_ins AS ON INSERT TO shoelace
@@ -1108,12 +1108,12 @@
 (8 rows)
 
 SELECT * FROM shoelace_log ORDER BY sl_name;
-  sl_name   | sl_avail | log_who  |         log_when         
-------------+----------+----------+--------------------------
- sl3        |       10 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl6        |       20 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl7        |        6 | Al Bundy | Thu Jan 01 00:00:00 1970
- sl8        |       21 | Al Bundy | Thu Jan 01 00:00:00 1970
+  sl_name   | sl_avail | log_who  |      log_when       
+------------+----------+----------+---------------------
+ sl3        |       10 | Al Bundy | 1970-01-01 00:00:00
+ sl6        |       20 | Al Bundy | 1970-01-01 00:00:00
+ sl7        |        6 | Al Bundy | 1970-01-01 00:00:00
+ sl8        |       21 | Al Bundy | 1970-01-01 00:00:00
 (4 rows)
 
     CREATE VIEW shoelace_obsolete AS
@@ -2562,7 +2562,7 @@
 shoelace_data|log_shoelace|CREATE RULE log_shoelace AS
     ON UPDATE TO public.shoelace_data
    WHERE (new.sl_avail <> old.sl_avail) DO  INSERT INTO shoelace_log (sl_name, sl_avail, log_who, log_when)
-  VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, 'Thu Jan 01 00:00:00 1970'::timestamp without time zone);
+  VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, '1970-01-01 00:00:00'::timestamp without time zone);
 shoelace_ok|shoelace_ok_ins|CREATE RULE shoelace_ok_ins AS
     ON INSERT TO public.shoelace_ok DO INSTEAD  UPDATE shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant)
   WHERE (shoelace.sl_name = new.ok_name);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/psql.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/psql.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/psql.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/psql.out	2019-09-05 16:23:59.660889873 -0500
@@ -252,7 +252,7 @@
 select '2000-01-01'::date as party_over
  party_over 
 ------------
- 01-01-2000
+ 2000-01-01
 (1 row)
 
 \unset FETCH_COUNT
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/select_views.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/select_views.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/select_views.out	2019-08-12 14:55:05.458232999 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/select_views.out	2019-09-05 16:24:07.157529396 -0500
@@ -1450,9 +1450,9 @@
 NOTICE:  f_leak => 1111-2222-3333-4444
  cid |     name      |       tel        |  passwd   |        cnum         | climit |    ymd     | usage 
 -----+---------------+------------------+-----------+---------------------+--------+------------+-------
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-05-2011 |    90
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-18-2011 |   110
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-21-2011 |   200
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-05 |    90
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-18 |   110
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-21 |   200
 (3 rows)
 
 EXPLAIN (COSTS OFF) SELECT * FROM my_credit_card_usage_normal
@@ -1462,7 +1462,7 @@
  Nested Loop
    Join Filter: (l.cid = r.cid)
    ->  Seq Scan on credit_usage r
-         Filter: ((ymd >= '10-01-2011'::date) AND (ymd < '11-01-2011'::date))
+         Filter: ((ymd >= '2011-10-01'::date) AND (ymd < '2011-11-01'::date))
    ->  Materialize
          ->  Subquery Scan on l
                Filter: f_leak(l.cnum)
@@ -1481,9 +1481,9 @@
 NOTICE:  f_leak => 1111-2222-3333-4444
  cid |     name      |       tel        |  passwd   |        cnum         | climit |    ymd     | usage 
 -----+---------------+------------------+-----------+---------------------+--------+------------+-------
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-05-2011 |    90
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-18-2011 |   110
- 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 10-21-2011 |   200
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-05 |    90
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-18 |   110
+ 101 | regress_alice | +81-12-3456-7890 | passwd123 | 1111-2222-3333-4444 |   4000 | 2011-10-21 |   200
 (3 rows)
 
 EXPLAIN (COSTS OFF) SELECT * FROM my_credit_card_usage_secure
@@ -1495,7 +1495,7 @@
    ->  Nested Loop
          Join Filter: (l.cid = r.cid)
          ->  Seq Scan on credit_usage r
-               Filter: ((ymd >= '10-01-2011'::date) AND (ymd < '11-01-2011'::date))
+               Filter: ((ymd >= '2011-10-01'::date) AND (ymd < '2011-11-01'::date))
          ->  Materialize
                ->  Hash Join
                      Hash Cond: (r_1.cid = l.cid)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/guc.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/guc.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/guc.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/guc.out	2019-09-05 16:24:11.325884964 -0500
@@ -1,9 +1,9 @@
 -- pg_regress should ensure that this default value applies; however
 -- we can't rely on any specific default value of vacuum_cost_delay
 SHOW datestyle;
-   DateStyle   
----------------
- Postgres, MDY
+ DateStyle 
+-----------
+ ISO, DMY
 (1 row)
 
 -- SET to some nondefault value
@@ -24,7 +24,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL has no effect outside of a transaction
@@ -47,7 +47,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL within a transaction that commits
@@ -69,7 +69,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 COMMIT;
@@ -88,7 +88,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET should be reverted after ROLLBACK
@@ -110,7 +110,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 13.08.2006 12:34:56 PDT
+ 13.08.2006 12:34:56 -05
 (1 row)
 
 ROLLBACK;
@@ -129,7 +129,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- Some tests with subtransactions
@@ -145,7 +145,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT first_sp;
@@ -166,7 +166,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 13.08.2006 12:34:56 PDT
+ 13.08.2006 12:34:56 -05
 (1 row)
 
 ROLLBACK TO first_sp;
@@ -179,7 +179,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT second_sp;
@@ -194,7 +194,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 SAVEPOINT third_sp;
@@ -215,7 +215,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK TO third_sp;
@@ -234,7 +234,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
        timestamptz       
 -------------------------
- 08/13/2006 12:34:56 PDT
+ 08/13/2006 12:34:56 -05
 (1 row)
 
 ROLLBACK TO second_sp;
@@ -253,7 +253,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 ROLLBACK;
@@ -272,7 +272,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL with Savepoints
@@ -292,7 +292,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT sp;
@@ -313,7 +313,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK TO sp;
@@ -332,7 +332,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 ROLLBACK;
@@ -351,7 +351,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET LOCAL persists through RELEASE (which was not true in 8.0-8.2)
@@ -371,7 +371,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 SAVEPOINT sp;
@@ -392,7 +392,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 RELEASE SAVEPOINT sp;
@@ -411,7 +411,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 ROLLBACK;
@@ -430,7 +430,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- SET followed by SET LOCAL
@@ -454,7 +454,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
          timestamptz          
 ------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+ Sun Aug 13 12:34:56 2006 -05
 (1 row)
 
 COMMIT;
@@ -473,7 +473,7 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 --
@@ -490,20 +490,20 @@
 SELECT '2006-08-13 12:34:56'::timestamptz;
       timestamptz       
 ------------------------
- 2006-08-13 12:34:56-07
+ 2006-08-13 12:34:56-05
 (1 row)
 
 RESET datestyle;
 SHOW datestyle;
-   DateStyle   
----------------
- Postgres, MDY
+ DateStyle 
+-----------
+ ISO, DMY
 (1 row)
 
 SELECT '2006-08-13 12:34:56'::timestamptz;
-         timestamptz          
-------------------------------
- Sun Aug 13 12:34:56 2006 PDT
+      timestamptz       
+------------------------
+ 2006-08-13 12:34:56-05
 (1 row)
 
 -- Test some simple error cases
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/foreign_data.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/foreign_data.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/foreign_data.out	2019-08-12 14:55:05.426230282 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/foreign_data.out	2019-09-05 16:24:14.654168859 -0500
@@ -728,7 +728,7 @@
  c3     | date    |           |          |         |                                | plain    |              | 
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (delimiter ',', quote '"', "be quoted" 'value')
 
@@ -849,7 +849,7 @@
  c10    | integer |           |          |         | (p1 'v1')                      | plain    |              | 
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (delimiter ',', quote '"', "be quoted" 'value')
 
@@ -897,7 +897,7 @@
  c10              | integer |           |          |         | (p1 'v1')
 Check constraints:
     "ft1_c2_check" CHECK (c2 <> ''::text)
-    "ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
+    "ft1_c3_check" CHECK (c3 >= '1994-01-01'::date AND c3 <= '1994-01-31'::date)
 Server: s0
 FDW options: (quote '~', "be quoted" 'value', escape '@')
 
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/window.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/window.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/window.out	2019-08-12 14:55:05.466233679 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/window.out	2019-09-05 16:24:15.146210828 -0500
@@ -1306,11 +1306,11 @@
 	SELECT i, min(i) over (order by i range between '1 day' preceding and '10 days' following) as min_i
   FROM generate_series(now(), now()+'100 days'::interval, '1 hour') i;
 SELECT pg_get_viewdef('v_window');
-                                                      pg_get_viewdef                                                       
----------------------------------------------------------------------------------------------------------------------------
-  SELECT i.i,                                                                                                             +
-     min(i.i) OVER (ORDER BY i.i RANGE BETWEEN '@ 1 day'::interval PRECEDING AND '@ 10 days'::interval FOLLOWING) AS min_i+
-    FROM generate_series(now(), (now() + '@ 100 days'::interval), '@ 1 hour'::interval) i(i);
+                                                    pg_get_viewdef                                                     
+-----------------------------------------------------------------------------------------------------------------------
+  SELECT i.i,                                                                                                         +
+     min(i.i) OVER (ORDER BY i.i RANGE BETWEEN '1 day'::interval PRECEDING AND '10 days'::interval FOLLOWING) AS min_i+
+    FROM generate_series(now(), (now() + '100 days'::interval), '01:00:00'::interval) i(i);
 (1 row)
 
 -- RANGE offset PRECEDING/FOLLOWING tests
@@ -1488,96 +1488,96 @@
 	salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 34900 |   5000 | 10-01-2006
- 34900 |   6000 | 10-01-2006
- 38400 |   3900 | 12-23-2006
- 47100 |   4800 | 08-01-2007
- 47100 |   5200 | 08-01-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   5200 | 08-15-2007
- 36100 |   3500 | 12-10-2007
- 32200 |   4500 | 01-01-2008
- 32200 |   4200 | 01-01-2008
+ 34900 |   5000 | 2006-10-01
+ 34900 |   6000 | 2006-10-01
+ 38400 |   3900 | 2006-12-23
+ 47100 |   4800 | 2007-08-01
+ 47100 |   5200 | 2007-08-01
+ 47100 |   4800 | 2007-08-08
+ 47100 |   5200 | 2007-08-15
+ 36100 |   3500 | 2007-12-10
+ 32200 |   4500 | 2008-01-01
+ 32200 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date desc range between '1 year'::interval preceding and '1 year'::interval following),
 	salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 32200 |   4200 | 01-01-2008
- 32200 |   4500 | 01-01-2008
- 36100 |   3500 | 12-10-2007
- 47100 |   5200 | 08-15-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   4800 | 08-01-2007
- 47100 |   5200 | 08-01-2007
- 38400 |   3900 | 12-23-2006
- 34900 |   5000 | 10-01-2006
- 34900 |   6000 | 10-01-2006
+ 32200 |   4200 | 2008-01-01
+ 32200 |   4500 | 2008-01-01
+ 36100 |   3500 | 2007-12-10
+ 47100 |   5200 | 2007-08-15
+ 47100 |   4800 | 2007-08-08
+ 47100 |   4800 | 2007-08-01
+ 47100 |   5200 | 2007-08-01
+ 38400 |   3900 | 2006-12-23
+ 34900 |   5000 | 2006-10-01
+ 34900 |   6000 | 2006-10-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date desc range between '1 year'::interval following and '1 year'::interval following),
 	salary, enroll_date from empsalary;
  sum | salary | enroll_date 
 -----+--------+-------------
-     |   4200 | 01-01-2008
-     |   4500 | 01-01-2008
-     |   3500 | 12-10-2007
-     |   5200 | 08-15-2007
-     |   4800 | 08-08-2007
-     |   4800 | 08-01-2007
-     |   5200 | 08-01-2007
-     |   3900 | 12-23-2006
-     |   5000 | 10-01-2006
-     |   6000 | 10-01-2006
+     |   4200 | 2008-01-01
+     |   4500 | 2008-01-01
+     |   3500 | 2007-12-10
+     |   5200 | 2007-08-15
+     |   4800 | 2007-08-08
+     |   4800 | 2007-08-01
+     |   5200 | 2007-08-01
+     |   3900 | 2006-12-23
+     |   5000 | 2006-10-01
+     |   6000 | 2006-10-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude current row), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 29900 |   5000 | 10-01-2006
- 28900 |   6000 | 10-01-2006
- 34500 |   3900 | 12-23-2006
- 42300 |   4800 | 08-01-2007
- 41900 |   5200 | 08-01-2007
- 42300 |   4800 | 08-08-2007
- 41900 |   5200 | 08-15-2007
- 32600 |   3500 | 12-10-2007
- 27700 |   4500 | 01-01-2008
- 28000 |   4200 | 01-01-2008
+ 29900 |   5000 | 2006-10-01
+ 28900 |   6000 | 2006-10-01
+ 34500 |   3900 | 2006-12-23
+ 42300 |   4800 | 2007-08-01
+ 41900 |   5200 | 2007-08-01
+ 42300 |   4800 | 2007-08-08
+ 41900 |   5200 | 2007-08-15
+ 32600 |   3500 | 2007-12-10
+ 27700 |   4500 | 2008-01-01
+ 28000 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude group), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 23900 |   5000 | 10-01-2006
- 23900 |   6000 | 10-01-2006
- 34500 |   3900 | 12-23-2006
- 37100 |   4800 | 08-01-2007
- 37100 |   5200 | 08-01-2007
- 42300 |   4800 | 08-08-2007
- 41900 |   5200 | 08-15-2007
- 32600 |   3500 | 12-10-2007
- 23500 |   4500 | 01-01-2008
- 23500 |   4200 | 01-01-2008
+ 23900 |   5000 | 2006-10-01
+ 23900 |   6000 | 2006-10-01
+ 34500 |   3900 | 2006-12-23
+ 37100 |   4800 | 2007-08-01
+ 37100 |   5200 | 2007-08-01
+ 42300 |   4800 | 2007-08-08
+ 41900 |   5200 | 2007-08-15
+ 32600 |   3500 | 2007-12-10
+ 23500 |   4500 | 2008-01-01
+ 23500 |   4200 | 2008-01-01
 (10 rows)
 
 select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following
 	exclude ties), salary, enroll_date from empsalary;
   sum  | salary | enroll_date 
 -------+--------+-------------
- 28900 |   5000 | 10-01-2006
- 29900 |   6000 | 10-01-2006
- 38400 |   3900 | 12-23-2006
- 41900 |   4800 | 08-01-2007
- 42300 |   5200 | 08-01-2007
- 47100 |   4800 | 08-08-2007
- 47100 |   5200 | 08-15-2007
- 36100 |   3500 | 12-10-2007
- 28000 |   4500 | 01-01-2008
- 27700 |   4200 | 01-01-2008
+ 28900 |   5000 | 2006-10-01
+ 29900 |   6000 | 2006-10-01
+ 38400 |   3900 | 2006-12-23
+ 41900 |   4800 | 2007-08-01
+ 42300 |   5200 | 2007-08-01
+ 47100 |   4800 | 2007-08-08
+ 47100 |   5200 | 2007-08-15
+ 36100 |   3500 | 2007-12-10
+ 28000 |   4500 | 2008-01-01
+ 27700 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by salary range between 1000 preceding and 1000 following),
@@ -1659,16 +1659,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        5000 |       5200 |   5000 | 10-01-2006
-        6000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4200 |   4500 | 01-01-2008
-        5000 |       4200 |   4200 | 01-01-2008
+        5000 |       5200 |   5000 | 2006-10-01
+        6000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4200 |   4500 | 2008-01-01
+        5000 |       4200 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1678,16 +1678,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        5000 |       5200 |   5000 | 10-01-2006
-        6000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4500 |   4500 | 01-01-2008
-        5000 |       4200 |   4200 | 01-01-2008
+        5000 |       5200 |   5000 | 2006-10-01
+        6000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4500 |   4500 | 2008-01-01
+        5000 |       4200 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1697,16 +1697,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        3900 |       5200 |   5000 | 10-01-2006
-        3900 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       3500 |   4500 | 01-01-2008
-        5000 |       3500 |   4200 | 01-01-2008
+        3900 |       5200 |   5000 | 2006-10-01
+        3900 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       3500 |   4500 | 2008-01-01
+        5000 |       3500 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following
@@ -1716,16 +1716,16 @@
 	salary, enroll_date from empsalary;
  first_value | last_value | salary | enroll_date 
 -------------+------------+--------+-------------
-        6000 |       5200 |   5000 | 10-01-2006
-        5000 |       5200 |   6000 | 10-01-2006
-        5000 |       3500 |   3900 | 12-23-2006
-        5000 |       4200 |   4800 | 08-01-2007
-        5000 |       4200 |   5200 | 08-01-2007
-        5000 |       4200 |   4800 | 08-08-2007
-        5000 |       4200 |   5200 | 08-15-2007
-        5000 |       4200 |   3500 | 12-10-2007
-        5000 |       4200 |   4500 | 01-01-2008
-        5000 |       4500 |   4200 | 01-01-2008
+        6000 |       5200 |   5000 | 2006-10-01
+        5000 |       5200 |   6000 | 2006-10-01
+        5000 |       3500 |   3900 | 2006-12-23
+        5000 |       4200 |   4800 | 2007-08-01
+        5000 |       4200 |   5200 | 2007-08-01
+        5000 |       4200 |   4800 | 2007-08-08
+        5000 |       4200 |   5200 | 2007-08-15
+        5000 |       4200 |   3500 | 2007-12-10
+        5000 |       4200 |   4500 | 2008-01-01
+        5000 |       4500 |   4200 | 2008-01-01
 (10 rows)
 
 -- RANGE offset PRECEDING/FOLLOWING with null values
@@ -2147,16 +2147,16 @@
              '1 year'::interval preceding and '1 year'::interval following);
  id | f_interval | first_value | last_value 
 ----+------------+-------------+------------
-  1 | @ 1 year   |           1 |          2
-  2 | @ 2 years  |           1 |          3
-  3 | @ 3 years  |           2 |          4
-  4 | @ 4 years  |           3 |          6
-  5 | @ 5 years  |           4 |          6
-  6 | @ 5 years  |           4 |          6
-  7 | @ 7 years  |           7 |          8
-  8 | @ 8 years  |           7 |          9
-  9 | @ 9 years  |           8 |         10
- 10 | @ 10 years |           9 |         10
+  1 | 1 year     |           1 |          2
+  2 | 2 years    |           1 |          3
+  3 | 3 years    |           2 |          4
+  4 | 4 years    |           3 |          6
+  5 | 5 years    |           4 |          6
+  6 | 5 years    |           4 |          6
+  7 | 7 years    |           7 |          8
+  8 | 8 years    |           7 |          9
+  9 | 9 years    |           8 |         10
+ 10 | 10 years   |           9 |         10
 (10 rows)
 
 select id, f_interval, first_value(id) over w, last_value(id) over w
@@ -2165,88 +2165,88 @@
              '1 year' preceding and '1 year' following);
  id | f_interval | first_value | last_value 
 ----+------------+-------------+------------
- 10 | @ 10 years |          10 |          9
-  9 | @ 9 years  |          10 |          8
-  8 | @ 8 years  |           9 |          7
-  7 | @ 7 years  |           8 |          7
-  6 | @ 5 years  |           6 |          4
-  5 | @ 5 years  |           6 |          4
-  4 | @ 4 years  |           6 |          3
-  3 | @ 3 years  |           4 |          2
-  2 | @ 2 years  |           3 |          1
-  1 | @ 1 year   |           2 |          1
+ 10 | 10 years   |          10 |          9
+  9 | 9 years    |          10 |          8
+  8 | 8 years    |           9 |          7
+  7 | 7 years    |           8 |          7
+  6 | 5 years    |           6 |          4
+  5 | 5 years    |           6 |          4
+  4 | 4 years    |           6 |          3
+  3 | 3 years    |           4 |          2
+  2 | 2 years    |           3 |          1
+  1 | 1 year     |           2 |          1
 (10 rows)
 
 select id, f_timestamptz, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamptz range between
              '1 year'::interval preceding and '1 year'::interval following);
- id |        f_timestamptz         | first_value | last_value 
-----+------------------------------+-------------+------------
-  1 | Thu Oct 19 02:23:54 2000 PDT |           1 |          3
-  2 | Fri Oct 19 02:23:54 2001 PDT |           1 |          4
-  3 | Fri Oct 19 02:23:54 2001 PDT |           1 |          4
-  4 | Sat Oct 19 02:23:54 2002 PDT |           2 |          5
-  5 | Sun Oct 19 02:23:54 2003 PDT |           4 |          6
-  6 | Tue Oct 19 02:23:54 2004 PDT |           5 |          7
-  7 | Wed Oct 19 02:23:54 2005 PDT |           6 |          8
-  8 | Thu Oct 19 02:23:54 2006 PDT |           7 |          9
-  9 | Fri Oct 19 02:23:54 2007 PDT |           8 |         10
- 10 | Sun Oct 19 02:23:54 2008 PDT |           9 |         10
+ id |     f_timestamptz      | first_value | last_value 
+----+------------------------+-------------+------------
+  1 | 2000-10-19 04:23:54-05 |           1 |          3
+  2 | 2001-10-19 04:23:54-05 |           1 |          4
+  3 | 2001-10-19 04:23:54-05 |           1 |          4
+  4 | 2002-10-19 04:23:54-05 |           2 |          5
+  5 | 2003-10-19 04:23:54-05 |           4 |          6
+  6 | 2004-10-19 04:23:54-05 |           5 |          7
+  7 | 2005-10-19 04:23:54-05 |           6 |          8
+  8 | 2006-10-19 04:23:54-05 |           7 |          9
+  9 | 2007-10-19 04:23:54-05 |           8 |         10
+ 10 | 2008-10-19 04:23:54-05 |           9 |         10
 (10 rows)
 
 select id, f_timestamptz, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamptz desc range between
              '1 year' preceding and '1 year' following);
- id |        f_timestamptz         | first_value | last_value 
-----+------------------------------+-------------+------------
- 10 | Sun Oct 19 02:23:54 2008 PDT |          10 |          9
-  9 | Fri Oct 19 02:23:54 2007 PDT |          10 |          8
-  8 | Thu Oct 19 02:23:54 2006 PDT |           9 |          7
-  7 | Wed Oct 19 02:23:54 2005 PDT |           8 |          6
-  6 | Tue Oct 19 02:23:54 2004 PDT |           7 |          5
-  5 | Sun Oct 19 02:23:54 2003 PDT |           6 |          4
-  4 | Sat Oct 19 02:23:54 2002 PDT |           5 |          2
-  3 | Fri Oct 19 02:23:54 2001 PDT |           4 |          1
-  2 | Fri Oct 19 02:23:54 2001 PDT |           4 |          1
-  1 | Thu Oct 19 02:23:54 2000 PDT |           3 |          1
+ id |     f_timestamptz      | first_value | last_value 
+----+------------------------+-------------+------------
+ 10 | 2008-10-19 04:23:54-05 |          10 |          9
+  9 | 2007-10-19 04:23:54-05 |          10 |          8
+  8 | 2006-10-19 04:23:54-05 |           9 |          7
+  7 | 2005-10-19 04:23:54-05 |           8 |          6
+  6 | 2004-10-19 04:23:54-05 |           7 |          5
+  5 | 2003-10-19 04:23:54-05 |           6 |          4
+  4 | 2002-10-19 04:23:54-05 |           5 |          2
+  3 | 2001-10-19 04:23:54-05 |           4 |          1
+  2 | 2001-10-19 04:23:54-05 |           4 |          1
+  1 | 2000-10-19 04:23:54-05 |           3 |          1
 (10 rows)
 
 select id, f_timestamp, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamp range between
              '1 year'::interval preceding and '1 year'::interval following);
- id |       f_timestamp        | first_value | last_value 
-----+--------------------------+-------------+------------
-  1 | Thu Oct 19 10:23:54 2000 |           1 |          3
-  2 | Fri Oct 19 10:23:54 2001 |           1 |          4
-  3 | Fri Oct 19 10:23:54 2001 |           1 |          4
-  4 | Sat Oct 19 10:23:54 2002 |           2 |          5
-  5 | Sun Oct 19 10:23:54 2003 |           4 |          6
-  6 | Tue Oct 19 10:23:54 2004 |           5 |          7
-  7 | Wed Oct 19 10:23:54 2005 |           6 |          8
-  8 | Thu Oct 19 10:23:54 2006 |           7 |          9
-  9 | Fri Oct 19 10:23:54 2007 |           8 |         10
- 10 | Sun Oct 19 10:23:54 2008 |           9 |         10
+ id |     f_timestamp     | first_value | last_value 
+----+---------------------+-------------+------------
+  1 | 2000-10-19 10:23:54 |           1 |          3
+  2 | 2001-10-19 10:23:54 |           1 |          4
+  3 | 2001-10-19 10:23:54 |           1 |          4
+  4 | 2002-10-19 10:23:54 |           2 |          5
+  5 | 2003-10-19 10:23:54 |           4 |          6
+  6 | 2004-10-19 10:23:54 |           5 |          7
+  7 | 2005-10-19 10:23:54 |           6 |          8
+  8 | 2006-10-19 10:23:54 |           7 |          9
+  9 | 2007-10-19 10:23:54 |           8 |         10
+ 10 | 2008-10-19 10:23:54 |           9 |         10
 (10 rows)
 
 select id, f_timestamp, first_value(id) over w, last_value(id) over w
 from datetimes
 window w as (order by f_timestamp desc range between
              '1 year' preceding and '1 year' following);
- id |       f_timestamp        | first_value | last_value 
-----+--------------------------+-------------+------------
- 10 | Sun Oct 19 10:23:54 2008 |          10 |          9
-  9 | Fri Oct 19 10:23:54 2007 |          10 |          8
-  8 | Thu Oct 19 10:23:54 2006 |           9 |          7
-  7 | Wed Oct 19 10:23:54 2005 |           8 |          6
-  6 | Tue Oct 19 10:23:54 2004 |           7 |          5
-  5 | Sun Oct 19 10:23:54 2003 |           6 |          4
-  4 | Sat Oct 19 10:23:54 2002 |           5 |          2
-  3 | Fri Oct 19 10:23:54 2001 |           4 |          1
-  2 | Fri Oct 19 10:23:54 2001 |           4 |          1
-  1 | Thu Oct 19 10:23:54 2000 |           3 |          1
+ id |     f_timestamp     | first_value | last_value 
+----+---------------------+-------------+------------
+ 10 | 2008-10-19 10:23:54 |          10 |          9
+  9 | 2007-10-19 10:23:54 |          10 |          8
+  8 | 2006-10-19 10:23:54 |           9 |          7
+  7 | 2005-10-19 10:23:54 |           8 |          6
+  6 | 2004-10-19 10:23:54 |           7 |          5
+  5 | 2003-10-19 10:23:54 |           6 |          4
+  4 | 2002-10-19 10:23:54 |           5 |          2
+  3 | 2001-10-19 10:23:54 |           4 |          1
+  2 | 2001-10-19 10:23:54 |           4 |          1
+  1 | 2000-10-19 10:23:54 |           3 |          1
 (10 rows)
 
 -- RANGE offset PRECEDING/FOLLOWING error cases
@@ -2565,16 +2565,16 @@
 	salary, enroll_date from empsalary;
  first_value | lead | nth_value | salary | enroll_date 
 -------------+------+-----------+--------+-------------
-        5000 | 6000 |      5000 |   5000 | 10-01-2006
-        5000 | 3900 |      5000 |   6000 | 10-01-2006
-        5000 | 4800 |      5000 |   3900 | 12-23-2006
-        3900 | 5200 |      3900 |   4800 | 08-01-2007
-        3900 | 4800 |      3900 |   5200 | 08-01-2007
-        4800 | 5200 |      4800 |   4800 | 08-08-2007
-        4800 | 3500 |      4800 |   5200 | 08-15-2007
-        5200 | 4500 |      5200 |   3500 | 12-10-2007
-        3500 | 4200 |      3500 |   4500 | 01-01-2008
-        3500 |      |      3500 |   4200 | 01-01-2008
+        5000 | 6000 |      5000 |   5000 | 2006-10-01
+        5000 | 3900 |      5000 |   6000 | 2006-10-01
+        5000 | 4800 |      5000 |   3900 | 2006-12-23
+        3900 | 5200 |      3900 |   4800 | 2007-08-01
+        3900 | 4800 |      3900 |   5200 | 2007-08-01
+        4800 | 5200 |      4800 |   4800 | 2007-08-08
+        4800 | 3500 |      4800 |   5200 | 2007-08-15
+        5200 | 4500 |      5200 |   3500 | 2007-12-10
+        3500 | 4200 |      3500 |   4500 | 2008-01-01
+        3500 |      |      3500 |   4200 | 2008-01-01
 (10 rows)
 
 select last_value(salary) over(order by enroll_date groups between 1 preceding and 1 following),
@@ -2582,16 +2582,16 @@
 	salary, enroll_date from empsalary;
  last_value | lag  | salary | enroll_date 
 ------------+------+--------+-------------
-       3900 |      |   5000 | 10-01-2006
-       3900 | 5000 |   6000 | 10-01-2006
-       5200 | 6000 |   3900 | 12-23-2006
-       4800 | 3900 |   4800 | 08-01-2007
-       4800 | 4800 |   5200 | 08-01-2007
-       5200 | 5200 |   4800 | 08-08-2007
-       3500 | 4800 |   5200 | 08-15-2007
-       4200 | 5200 |   3500 | 12-10-2007
-       4200 | 3500 |   4500 | 01-01-2008
-       4200 | 4500 |   4200 | 01-01-2008
+       3900 |      |   5000 | 2006-10-01
+       3900 | 5000 |   6000 | 2006-10-01
+       5200 | 6000 |   3900 | 2006-12-23
+       4800 | 3900 |   4800 | 2007-08-01
+       4800 | 4800 |   5200 | 2007-08-01
+       5200 | 5200 |   4800 | 2007-08-08
+       3500 | 4800 |   5200 | 2007-08-15
+       4200 | 5200 |   3500 | 2007-12-10
+       4200 | 3500 |   4500 | 2008-01-01
+       4200 | 4500 |   4200 | 2008-01-01
 (10 rows)
 
 select first_value(salary) over(order by enroll_date groups between 1 following and 3 following
@@ -2602,16 +2602,16 @@
 	salary, enroll_date from empsalary;
  first_value | lead | nth_value | salary | enroll_date 
 -------------+------+-----------+--------+-------------
-        3900 | 6000 |      3900 |   5000 | 10-01-2006
-        3900 | 3900 |      3900 |   6000 | 10-01-2006
-        4800 | 4800 |      4800 |   3900 | 12-23-2006
-        4800 | 5200 |      4800 |   4800 | 08-01-2007
-        4800 | 4800 |      4800 |   5200 | 08-01-2007
-        5200 | 5200 |      5200 |   4800 | 08-08-2007
-        3500 | 3500 |      3500 |   5200 | 08-15-2007
-        4500 | 4500 |      4500 |   3500 | 12-10-2007
-             | 4200 |           |   4500 | 01-01-2008
-             |      |           |   4200 | 01-01-2008
+        3900 | 6000 |      3900 |   5000 | 2006-10-01
+        3900 | 3900 |      3900 |   6000 | 2006-10-01
+        4800 | 4800 |      4800 |   3900 | 2006-12-23
+        4800 | 5200 |      4800 |   4800 | 2007-08-01
+        4800 | 4800 |      4800 |   5200 | 2007-08-01
+        5200 | 5200 |      5200 |   4800 | 2007-08-08
+        3500 | 3500 |      3500 |   5200 | 2007-08-15
+        4500 | 4500 |      4500 |   3500 | 2007-12-10
+             | 4200 |           |   4500 | 2008-01-01
+             |      |           |   4200 | 2008-01-01
 (10 rows)
 
 select last_value(salary) over(order by enroll_date groups between 1 following and 3 following
@@ -2620,16 +2620,16 @@
 	salary, enroll_date from empsalary;
  last_value | lag  | salary | enroll_date 
 ------------+------+--------+-------------
-       4800 |      |   5000 | 10-01-2006
-       4800 | 5000 |   6000 | 10-01-2006
-       5200 | 6000 |   3900 | 12-23-2006
-       3500 | 3900 |   4800 | 08-01-2007
-       3500 | 4800 |   5200 | 08-01-2007
-       4200 | 5200 |   4800 | 08-08-2007
-       4200 | 4800 |   5200 | 08-15-2007
-       4200 | 5200 |   3500 | 12-10-2007
-            | 3500 |   4500 | 01-01-2008
-            | 4500 |   4200 | 01-01-2008
+       4800 |      |   5000 | 2006-10-01
+       4800 | 5000 |   6000 | 2006-10-01
+       5200 | 6000 |   3900 | 2006-12-23
+       3500 | 3900 |   4800 | 2007-08-01
+       3500 | 4800 |   5200 | 2007-08-01
+       4200 | 5200 |   4800 | 2007-08-08
+       4200 | 4800 |   5200 | 2007-08-15
+       4200 | 5200 |   3500 | 2007-12-10
+            | 3500 |   4500 | 2008-01-01
+            | 4500 |   4200 | 2008-01-01
 (10 rows)
 
 -- Show differences in offset interpretation between ROWS, RANGE, and GROUPS
@@ -3382,8 +3382,8 @@
   FROM (VALUES(1,'1 sec'),(2,'2 sec'),(3,NULL),(4,NULL)) t(i,v);
  i |    avg     
 ---+------------
- 1 | @ 1.5 secs
- 2 | @ 2 secs
+ 1 | 00:00:01.5
+ 2 | 00:00:02
  3 | 
  4 | 
 (4 rows)
@@ -3432,8 +3432,8 @@
   FROM (VALUES(1,'1 sec'),(2,'2 sec'),(3,NULL),(4,NULL)) t(i,v);
  i |   sum    
 ---+----------
- 1 | @ 3 secs
- 2 | @ 2 secs
+ 1 | 00:00:03
+ 2 | 00:00:02
  3 | 
  4 | 
 (4 rows)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/json.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/json.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/json.out	2019-09-02 18:21:49.555379953 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/json.out	2019-09-05 16:24:16.358314214 -0500
@@ -1351,9 +1351,9 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(null::jpop,'{"a":"blurfl","x":43.2}') q;
@@ -1363,9 +1363,9 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(null::jpop,'{"a":[100,200,false],"x":43.2}') q;
@@ -1375,17 +1375,17 @@
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"a":[100,200,false],"x":43.2}') q;
-        a        | b |            c             
------------------+---+--------------------------
- [100,200,false] | 3 | Mon Dec 31 15:30:56 2012
+        a        | b |          c          
+-----------------+---+---------------------
+ [100,200,false] | 3 | 2012-12-31 15:30:56
 (1 row)
 
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{"c":[100,200,false],"x":43.2}') q;
 ERROR:  invalid input syntax for type timestamp: "[100,200,false]"
 select * from json_populate_record(row('x',3,'2012-12-31 15:30:56')::jpop,'{}') q;
- a | b |            c             
----+---+--------------------------
- x | 3 | Mon Dec 31 15:30:56 2012
+ a | b |          c          
+---+---+---------------------
+ x | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT i FROM json_populate_record(NULL::jsrec_i_not_null, '{"x": 43.2}') q;
@@ -1702,15 +1702,15 @@
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": [1, 2]}') q;
 ERROR:  cannot call populate_composite on an array
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}') q;
-                rec                
------------------------------------
- (abc,,"Thu Jan 02 00:00:00 2003")
+             rec              
+------------------------------
+ (abc,,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": "(abc,42,01.02.2003)"}') q;
-                 rec                 
--------------------------------------
- (abc,42,"Thu Jan 02 00:00:00 2003")
+              rec               
+--------------------------------
+ (abc,42,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": 123}') q;
@@ -1719,21 +1719,21 @@
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [1, 2]}') q;
 ERROR:  cannot call populate_composite on a scalar
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q;
-                          reca                          
---------------------------------------------------------
- {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+                       reca                        
+---------------------------------------------------
+ {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": ["(abc,42,01.02.2003)"]}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": "{\"(abc,42,01.02.2003)\"}"}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT rec FROM json_populate_record(
@@ -1741,9 +1741,9 @@
 		row('x',3,'2012-12-31 15:30:56')::jpop,NULL)::jsrec,
 	'{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}'
 ) q;
-                rec                 
-------------------------------------
- (abc,3,"Thu Jan 02 00:00:00 2003")
+              rec              
+-------------------------------
+ (abc,3,"2003-02-01 00:00:00")
 (1 row)
 
 -- anonymous record type
@@ -1780,38 +1780,38 @@
 ERROR:  value for domain j_ordered_pair violates check constraint "j_ordered_pair_check"
 -- populate_recordset
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-       a       | b  |            c             
----------------+----+--------------------------
+       a       | b  |          c          
+---------------+----+---------------------
  [100,200,300] | 99 | 
- {"z":true}    |  3 | Fri Jan 20 10:42:53 2012
+ {"z":true}    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"c":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
@@ -1824,24 +1824,24 @@
 (1 row)
 
 select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-       a       | b  |            c             
----------------+----+--------------------------
+       a       | b  |          c          
+---------------+----+---------------------
  [100,200,300] | 99 | 
- {"z":true}    |  3 | Fri Jan 20 10:42:53 2012
+ {"z":true}    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 -- anonymous record type
@@ -1930,11 +1930,11 @@
 }'::json
 FROM generate_series(1, 3);
 SELECT (json_populate_record(NULL::jsrec, js)).* FROM jspoptest;
- i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |                rec                |                          reca                          
----+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+-----------------------------------+--------------------------------------------------------
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+ i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |             rec              |                       reca                        
+---+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+------------------------------+---------------------------------------------------
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (3 rows)
 
 DROP TYPE jsrec;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/jsonb.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/jsonb.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/jsonb.out	2019-09-02 18:21:49.555379953 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/jsonb.out	2019-09-05 16:24:16.962365736 -0500
@@ -2040,9 +2040,9 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(NULL::jbpop,'{"a":"blurfl","x":43.2}') q;
@@ -2052,9 +2052,9 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":"blurfl","x":43.2}') q;
-   a    | b |            c             
---------+---+--------------------------
- blurfl | 3 | Mon Dec 31 15:30:56 2012
+   a    | b |          c          
+--------+---+---------------------
+ blurfl | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(NULL::jbpop,'{"a":[100,200,false],"x":43.2}') q;
@@ -2064,17 +2064,17 @@
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"a":[100,200,false],"x":43.2}') q;
-         a         | b |            c             
--------------------+---+--------------------------
- [100, 200, false] | 3 | Mon Dec 31 15:30:56 2012
+         a         | b |          c          
+-------------------+---+---------------------
+ [100, 200, false] | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop,'{"c":[100,200,false],"x":43.2}') q;
 ERROR:  invalid input syntax for type timestamp: "[100, 200, false]"
 SELECT * FROM jsonb_populate_record(row('x',3,'2012-12-31 15:30:56')::jbpop, '{}') q;
- a | b |            c             
----+---+--------------------------
- x | 3 | Mon Dec 31 15:30:56 2012
+ a | b |          c          
+---+---+---------------------
+ x | 3 | 2012-12-31 15:30:56
 (1 row)
 
 SELECT i FROM jsonb_populate_record(NULL::jsbrec_i_not_null, '{"x": 43.2}') q;
@@ -2391,15 +2391,15 @@
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": [1, 2]}') q;
 ERROR:  cannot call populate_composite on an array
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}') q;
-                rec                
------------------------------------
- (abc,,"Thu Jan 02 00:00:00 2003")
+             rec              
+------------------------------
+ (abc,,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": "(abc,42,01.02.2003)"}') q;
-                 rec                 
--------------------------------------
- (abc,42,"Thu Jan 02 00:00:00 2003")
+              rec               
+--------------------------------
+ (abc,42,"2003-02-01 00:00:00")
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": 123}') q;
@@ -2408,21 +2408,21 @@
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [1, 2]}') q;
 ERROR:  cannot call populate_composite on a scalar
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q;
-                          reca                          
---------------------------------------------------------
- {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+                       reca                        
+---------------------------------------------------
+ {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": ["(abc,42,01.02.2003)"]}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": "{\"(abc,42,01.02.2003)\"}"}') q;
-                   reca                    
--------------------------------------------
- {"(abc,42,\"Thu Jan 02 00:00:00 2003\")"}
+                 reca                 
+--------------------------------------
+ {"(abc,42,\"2003-02-01 00:00:00\")"}
 (1 row)
 
 SELECT rec FROM jsonb_populate_record(
@@ -2430,9 +2430,9 @@
 		row('x',3,'2012-12-31 15:30:56')::jbpop,NULL)::jsbrec,
 	'{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}'
 ) q;
-                rec                 
-------------------------------------
- (abc,3,"Thu Jan 02 00:00:00 2003")
+              rec              
+-------------------------------
+ (abc,3,"2003-02-01 00:00:00")
 (1 row)
 
 -- anonymous record type
@@ -2469,61 +2469,61 @@
 ERROR:  value for domain jb_ordered_pair violates check constraint "jb_ordered_pair_check"
 -- populate_recordset
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-        a        | b  |            c             
------------------+----+--------------------------
+        a        | b  |          c          
+-----------------+----+---------------------
  [100, 200, 300] | 99 | 
- {"z": true}     |  3 | Fri Jan 20 10:42:53 2012
+ {"z": true}     |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"c":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
 ERROR:  invalid input syntax for type timestamp: "[100, 200, 300]"
 SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b |            c             
---------+---+--------------------------
+   a    | b |          c          
+--------+---+---------------------
  blurfl |   | 
-        | 3 | Fri Jan 20 10:42:53 2012
+        | 3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q;
-   a    | b  |            c             
---------+----+--------------------------
+   a    | b  |          c          
+--------+----+---------------------
  blurfl | 99 | 
- def    |  3 | Fri Jan 20 10:42:53 2012
+ def    |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q;
-        a        | b  |            c             
------------------+----+--------------------------
+        a        | b  |          c          
+-----------------+----+---------------------
  [100, 200, 300] | 99 | 
- {"z": true}     |  3 | Fri Jan 20 10:42:53 2012
+ {"z": true}     |  3 | 2012-01-20 10:42:53
 (2 rows)
 
 -- anonymous record type
@@ -2725,11 +2725,11 @@
 }'::jsonb
 FROM generate_series(1, 3);
 SELECT (jsonb_populate_record(NULL::jsbrec, js)).* FROM jsbpoptest;
- i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |                rec                |                          reca                          
----+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+-----------------------------------+--------------------------------------------------------
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
-   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"Thu Jan 02 00:00:00 2003") | {"(abc,456,)",NULL,"(,,\"Thu Jan 02 00:00:00 2003\")"}
+ i | ia | ia1 | ia2 | ia3 | ia1d | ia2d | t | ta | c | ca | ts | js | jsb |        jsa         |             rec              |                       reca                        
+---+----+-----+-----+-----+------+------+---+----+---+----+----+----+-----+--------------------+------------------------------+---------------------------------------------------
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
+   |    |     |     |     |      |      |   |    |   |    |    |    |     | {1,"\"2\"",NULL,4} | (abc,,"2003-02-01 00:00:00") | {"(abc,456,)",NULL,"(,,\"2003-02-01 00:00:00\")"}
 (3 rows)
 
 DROP TYPE jsbrec;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/plpgsql.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/plpgsql.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/plpgsql.out	2019-08-12 14:55:05.446231980 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/plpgsql.out	2019-09-05 16:24:20.438662235 -0500
@@ -4210,33 +4210,33 @@
 select cast_invoker(20150717);
  cast_invoker 
 --------------
- 07-17-2015
+ 2015-07-17
 (1 row)
 
 select cast_invoker(20150718);  -- second call crashed in pre-release 9.5
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 begin;
 select cast_invoker(20150717);
  cast_invoker 
 --------------
- 07-17-2015
+ 2015-07-17
 (1 row)
 
 select cast_invoker(20150718);
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 savepoint s1;
 select cast_invoker(20150718);
  cast_invoker 
 --------------
- 07-18-2015
+ 2015-07-18
 (1 row)
 
 select cast_invoker(-1); -- fails
@@ -4247,13 +4247,13 @@
 select cast_invoker(20150719);
  cast_invoker 
 --------------
- 07-19-2015
+ 2015-07-19
 (1 row)
 
 select cast_invoker(20150720);
  cast_invoker 
 --------------
- 07-20-2015
+ 2015-07-20
 (1 row)
 
 commit;
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/alter_table.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/alter_table.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/alter_table.out	2019-08-12 14:55:15.915120765 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/alter_table.out	2019-09-05 16:24:28.215325474 -0500
@@ -48,9 +48,9 @@
 	'(0,2,4.1,4.1,3.1,3.1)', '(4.1,4.1,3.1,3.1)',
 	'epoch', '01:00:10', '{1.0,2.0,3.0,4.0}', '{1.0,2.0,3.0,4.0}', '{1,2,3,4}');
 SELECT * FROM attmp;
- initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |            v             |        w         |     x     |     y     |     z     
----------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+--------------------------+------------------+-----------+-----------+-----------
-         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | Thu Jan 01 00:00:00 1970 | @ 1 hour 10 secs | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
+ initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |          v          |    w     |     x     |     y     |     z     
+---------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+---------------------+----------+-----------+-----------+-----------
+         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | 1970-01-01 00:00:00 | 01:00:10 | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
 (1 row)
 
 DROP TABLE attmp;
@@ -90,9 +90,9 @@
 	'(0,2,4.1,4.1,3.1,3.1)', '(4.1,4.1,3.1,3.1)',
 	'epoch', '01:00:10', '{1.0,2.0,3.0,4.0}', '{1.0,2.0,3.0,4.0}', '{1,2,3,4}');
 SELECT * FROM attmp;
- initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |            v             |        w         |     x     |     y     |     z     
----------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+--------------------------+------------------+-----------+-----------+-----------
-         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | Thu Jan 01 00:00:00 1970 | @ 1 hour 10 secs | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
+ initial | a |  b   |  c   |  d  |  e  | f |           g           | i |   k    |   l   |  m  |        n        | p |     q     |           r           |              s              |          t          |          v          |    w     |     x     |     y     |     z     
+---------+---+------+------+-----+-----+---+-----------------------+---+--------+-------+-----+-----------------+---+-----------+-----------------------+-----------------------------+---------------------+---------------------+----------+-----------+-----------+-----------
+         | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | c | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | t | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | 1970-01-01 00:00:00 | 01:00:10 | {1,2,3,4} | {1,2,3,4} | {1,2,3,4}
 (1 row)
 
 CREATE INDEX attmp_idx ON attmp (a, (d + e), b);
@@ -541,11 +541,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
 (7 rows)
 
 create table nv_child_2009 (check (d between '2009-01-01'::date and '2009-12-31'::date)) inherits (nv_parent);
@@ -554,11 +554,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2011'::date) AND (d <= '08-31-2011'::date))
+         Filter: ((d >= '2011-08-01'::date) AND (d <= '2011-08-31'::date))
 (7 rows)
 
 explain (costs off) select * from nv_parent where d between '2009-08-01'::date and '2009-08-31'::date;
@@ -566,13 +566,13 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2011
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2009
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
 (9 rows)
 
 -- after validation, the constraint should be used
@@ -582,11 +582,11 @@
 ---------------------------------------------------------------------------
  Append
    ->  Seq Scan on nv_parent
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2010
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
    ->  Seq Scan on nv_child_2009
-         Filter: ((d >= '08-01-2009'::date) AND (d <= '08-31-2009'::date))
+         Filter: ((d >= '2009-08-01'::date) AND (d <= '2009-08-31'::date))
 (7 rows)
 
 -- add an inherited NOT VALID constraint
@@ -597,8 +597,8 @@
 --------+------+-----------+----------+---------
  d      | date |           |          | 
 Check constraints:
-    "nv_child_2009_d_check" CHECK (d >= '01-01-2009'::date AND d <= '12-31-2009'::date)
-    "nv_parent_d_check" CHECK (d >= '01-01-2001'::date AND d <= '12-31-2099'::date) NOT VALID
+    "nv_child_2009_d_check" CHECK (d >= '2009-01-01'::date AND d <= '2009-12-31'::date)
+    "nv_parent_d_check" CHECK (d >= '2001-01-01'::date AND d <= '2099-12-31'::date) NOT VALID
 Inherits: nv_parent
 
 -- we leave nv_parent and children around to help test pg_dump logic
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/polymorphism.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/polymorphism.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/polymorphism.out	2019-07-12 13:20:36.225289250 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/polymorphism.out	2019-09-05 16:24:28.935386882 -0500
@@ -1027,7 +1027,7 @@
 select dfunc(to_date('20081215','YYYYMMDD'));
        dfunc       
 -------------------
- Hello, 12-15-2008
+ Hello, 2008-12-15
 (1 row)
 
 select dfunc('City'::text);
@@ -1202,31 +1202,31 @@
 select (dfunc('Hello World', 20, '2009-07-25'::date)).*;
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', 20, '2009-07-25'::date);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc(c := '2009-07-25'::date, a := 'Hello World', b := 20);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', b := 20, c := '2009-07-25'::date);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', c := '2009-07-25'::date, b := 20);
       a      | b  |     c      
 -------------+----+------------
- Hello World | 20 | 07-25-2009
+ Hello World | 20 | 2009-07-25
 (1 row)
 
 select * from dfunc('Hello World', c := 20, b := '2009-07-25'::date);  -- fail
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rowtypes.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rowtypes.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/rowtypes.out	2019-08-12 14:55:05.454232660 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/rowtypes.out	2019-09-05 16:24:29.251413833 -0500
@@ -95,7 +95,7 @@
 select * from people;
      fn     |     bd     
 ------------+------------
- (Joe,Blow) | 01-10-1984
+ (Joe,Blow) | 1984-01-10
 (1 row)
 
 -- at the moment this will not work due to ALTER TABLE inadequacy:
@@ -106,7 +106,7 @@
 select * from people;
      fn      |     bd     
 -------------+------------
- (Joe,Blow,) | 01-10-1984
+ (Joe,Blow,) | 1984-01-10
 (1 row)
 
 -- test insertion/updating of subfields
@@ -114,7 +114,7 @@
 select * from people;
       fn       |     bd     
 ---------------+------------
- (Joe,Blow,Jr) | 01-10-1984
+ (Joe,Blow,Jr) | 1984-01-10
 (1 row)
 
 insert into quadtable (f1, q.c1.r, q.c2.i) values(44,55,66);
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/partition_prune.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/partition_prune.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/partition_prune.out	2019-08-12 14:55:15.923121444 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/partition_prune.out	2019-09-05 16:24:34.123829343 -0500
@@ -3149,12 +3149,12 @@
 -- timestamp < timestamptz comparison is only stable, not immutable
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning where a < '2000-02-01'::timestamptz;
-                                   QUERY PLAN                                   
---------------------------------------------------------------------------------
+                                QUERY PLAN                                
+--------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning1 (actual rows=0 loops=1)
-         Filter: (a < 'Tue Feb 01 00:00:00 2000 PST'::timestamp with time zone)
+         Filter: (a < '2000-02-01 00:00:00-05'::timestamp with time zone)
 (4 rows)
 
 -- check ScalarArrayOp cases
@@ -3170,43 +3170,43 @@
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', '2010-01-01']::timestamp[]);
-                                                   QUERY PLAN                                                   
-----------------------------------------------------------------------------------------------------------------
+                                              QUERY PLAN                                              
+------------------------------------------------------------------------------------------------------
  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-   Filter: (a = ANY ('{"Tue Feb 01 00:00:00 2000","Fri Jan 01 00:00:00 2010"}'::timestamp without time zone[]))
+   Filter: (a = ANY ('{"2000-02-01 00:00:00","2010-01-01 00:00:00"}'::timestamp without time zone[]))
 (2 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', localtimestamp]::timestamp[]);
-                                                 QUERY PLAN                                                 
-------------------------------------------------------------------------------------------------------------
+                                              QUERY PLAN                                               
+-------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-         Filter: (a = ANY (ARRAY['Tue Feb 01 00:00:00 2000'::timestamp without time zone, LOCALTIMESTAMP]))
+         Filter: (a = ANY (ARRAY['2000-02-01 00:00:00'::timestamp without time zone, LOCALTIMESTAMP]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2010-02-01', '2020-01-01']::timestamptz[]);
-                                                        QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
+                                                  QUERY PLAN                                                   
+---------------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning1 (never executed)
-         Filter: (a = ANY ('{"Mon Feb 01 00:00:00 2010 PST","Wed Jan 01 00:00:00 2020 PST"}'::timestamp with time zone[]))
+         Filter: (a = ANY ('{"2010-02-01 00:00:00-05","2020-01-01 00:00:00-05"}'::timestamp with time zone[]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
 select * from stable_qual_pruning
   where a = any(array['2000-02-01', '2010-01-01']::timestamptz[]);
-                                                        QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
+                                                  QUERY PLAN                                                   
+---------------------------------------------------------------------------------------------------------------
  Append (actual rows=0 loops=1)
    Subplans Removed: 2
    ->  Seq Scan on stable_qual_pruning2 (actual rows=0 loops=1)
-         Filter: (a = ANY ('{"Tue Feb 01 00:00:00 2000 PST","Fri Jan 01 00:00:00 2010 PST"}'::timestamp with time zone[]))
+         Filter: (a = ANY ('{"2000-02-01 00:00:00-05","2010-01-01 00:00:00-05"}'::timestamp with time zone[]))
 (4 rows)
 
 explain (analyze, costs off, summary off, timing off)
diff -U3 /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/fast_default.out /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/fast_default.out
--- /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/expected/fast_default.out	2019-07-12 13:20:36.197291926 -0500
+++ /home/jcasanov/Documentos/pgdg/projects/postgresql/src/test/regress/results/fast_default.out	2019-09-05 16:24:40.080337270 -0500
@@ -126,36 +126,36 @@
        c_hugetext = repeat('abcdefg',1000) as c_hugetext_origdef,
        c_hugetext = repeat('poiuyt', 1000) as c_hugetext_newdef
 FROM T ORDER BY pk;
- pk | c_int | c_bpchar | c_text |   c_date   |       c_timestamp        |     c_timestamp_null     |         c_array          | c_small | c_small_null |       c_big       |       c_num       |  c_time  | c_interval | c_hugetext_origdef | c_hugetext_newdef 
-----+-------+----------+--------+------------+--------------------------+--------------------------+--------------------------+---------+--------------+-------------------+-------------------+----------+------------+--------------------+-------------------
-  1 |     1 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  2 |     1 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  3 |     2 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  4 |     2 | hello    | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  5 |     2 | dog      | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  6 |     2 | dog      | world  | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  7 |     2 | dog      | cat    | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  8 |     2 | dog      | cat    | 06-02-2016 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
-  9 |     2 | dog      | cat    | 01-01-2010 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 10 |     2 | dog      | cat    | 01-01-2010 | Thu Sep 01 12:00:00 2016 |                          | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 11 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 12 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 13 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 14 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 15 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 16 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 17 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 18 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | @ 1 day    | t                  | f
- 19 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day    | t                  | f
- 20 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day    | t                  | f
- 21 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day    | t                  | f
- 22 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day    | t                  | f
- 23 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours  | t                  | f
- 24 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours  | t                  | f
- 25 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
- 26 |     2 | dog      | cat    | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
- 27 |     2 |          |        |            |                          | Thu Sep 29 12:00:00 2016 |                          |         |           13 |                   |                   |          |            |                    | 
- 28 |     2 |          |        |            |                          | Thu Sep 29 12:00:00 2016 |                          |         |           13 |                   |                   |          |            |                    | 
+ pk | c_int | c_bpchar | c_text |   c_date   |     c_timestamp     |  c_timestamp_null   |         c_array          | c_small | c_small_null |       c_big       |       c_num       |  c_time  | c_interval | c_hugetext_origdef | c_hugetext_newdef 
+----+-------+----------+--------+------------+---------------------+---------------------+--------------------------+---------+--------------+-------------------+-------------------+----------+------------+--------------------+-------------------
+  1 |     1 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  2 |     1 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  3 |     2 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  4 |     2 | hello    | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  5 |     2 | dog      | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  6 |     2 | dog      | world  | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  7 |     2 | dog      | cat    | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  8 |     2 | dog      | cat    | 2016-06-02 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+  9 |     2 | dog      | cat    | 2010-01-01 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 10 |     2 | dog      | cat    | 2010-01-01 | 2016-09-01 12:00:00 |                     | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 11 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 12 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,the,real,world} |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 13 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 14 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |      -5 |              |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 15 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 16 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 |   180000000000018 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 17 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 18 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 |     1.00000000001 | 12:00:00 | 1 day      | t                  | f
+ 19 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | 1 day      | t                  | f
+ 20 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | 1 day      | t                  | f
+ 21 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 1 day      | t                  | f
+ 22 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 1 day      | t                  | f
+ 23 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 03:00:00   | t                  | f
+ 24 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | 03:00:00   | t                  | f
+ 25 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
+ 26 |     2 | dog      | cat    | 2010-01-01 | 1970-12-31 11:12:13 | 2016-09-29 12:00:00 | {This,is,no,fantasy}     |       9 |           13 | -9999999999999999 | 2.000000000000002 | 23:59:59 |            | f                  | t
+ 27 |     2 |          |        |            |                     | 2016-09-29 12:00:00 |                          |         |           13 |                   |                   |          |            |                    | 
+ 28 |     2 |          |        |            |                     | 2016-09-29 12:00:00 |                          |         |           13 |                   |                   |          |            |                    | 
 (28 rows)
 
 SELECT comp();
@@ -218,24 +218,24 @@
               ALTER COLUMN c_array     DROP DEFAULT;
 INSERT INTO T VALUES (15), (16);
 SELECT * FROM T;
- pk | c_int | c_bpchar |    c_text    |   c_date   |       c_timestamp        |            c_array            
-----+-------+----------+--------------+------------+--------------------------+-------------------------------
-  1 |     6 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  2 |     6 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  3 |     8 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  4 |     8 | abcd     | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  5 |     8 | abc      | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  6 |     8 | abc      | abcdef       | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  7 |     8 | abc      | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  8 |     8 | abc      | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
-  9 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
- 10 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world}
- 11 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world}
- 12 |     8 | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world}
- 13 |       | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy}
- 14 |       | abc      | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy}
- 15 |       |          |              |            |                          | 
- 16 |       |          |              |            |                          | 
+ pk | c_int | c_bpchar |    c_text    |   c_date   |     c_timestamp     |            c_array            
+----+-------+----------+--------------+------------+---------------------+-------------------------------
+  1 |     6 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  2 |     6 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  3 |     8 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  4 |     8 | abcd     | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  5 |     8 | abc      | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  6 |     8 | abc      | abcdef       | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  7 |     8 | abc      | abcdefghijkl | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  8 |     8 | abc      | abcdefghijkl | 2016-06-12 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+  9 |     8 | abc      | abcdefghijkl | 2009-12-28 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+ 10 |     8 | abc      | abcdefghijkl | 2009-12-28 | 2016-09-11 00:00:00 | {This,is,abcd,the,real,world}
+ 11 |     8 | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,abcd,the,real,world}
+ 12 |     8 | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,abcd,the,real,world}
+ 13 |       | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,a,fantasy}
+ 14 |       | abc      | abcdefghijkl | 2009-12-28 | 1971-01-30 00:00:00 | {This,is,a,fantasy}
+ 15 |       |          |              |            |                     | 
+ 16 |       |          |              |            |                     | 
 (16 rows)
 
 SELECT comp();
#48Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Jaime Casanova (#47)
1 attachment(s)
Re: Built-in connection pooler

On 06.09.2019 1:01, Jaime Casanova wrote:

On Thu, 15 Aug 2019 at 06:01, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

I have implemented one more trick reducing the number of tainted backends:
now it is possible to use session variables in pooled backends.

How does it work?
The proxy detects "SET var=" statements and converts them to "SET LOCAL
var=".
All such assignments are also concatenated and stored in the session context
at the proxy.
The proxy then injects this statement into each transaction block or
prepends it to standalone statements.

This mechanism works only for GUCs set outside a transaction.
By default it is switched off. To enable it you should switch on the
"proxying_gucs" parameter.

there is definitely something odd here. I applied the patch and
changed these parameters

connection_proxies = '3'
session_pool_size = '33'
port = '5433'
proxy_port = '5432'

after this I ran "make installcheck"; the idea is to check whether an
application going through the proxy behaves sanely. As far as I
understood, if a session needs session semantics it will taint the
backend, otherwise it will act in transaction mode.

Sadly I got a lot of FAILED tests; I'm attaching the regression diffs
for both installcheck and installcheck-parallel.
btw, after make installcheck-parallel I wanted to do a new test but
wasn't able to drop the regression database because there was still a
subscription, so I tried to drop it and got a core file (I was
connected through the pool worker); I'm attaching the backtrace of the
crash too.

Thank you very much for testing the connection pooler.
The problem with "make installcheck" is caused by the GUCs that
pg_regress passes inside the startup packet:

    putenv("PGTZ=PST8PDT");
    putenv("PGDATESTYLE=Postgres, MDY");

They are not currently handled by the built-in proxy,
simply because I have not yet found an acceptable solution for it.
With the newly added proxying_gucs option this problem is solved, but it
leads to another problem:
some Postgres statements are not transactional and cannot be used
inside a transaction block.
Since proxying_gucs prepends the GUC settings to the statement (and so
implicitly forms a transaction block),
such statements cause errors; see the sketch below. I added a check to
avoid prepending GUC settings to non-transactional statements.
But this check turns out to be not so trivial. At least I failed to make it
work: it doesn't correctly handle specifying a default namespace.
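
To illustrate the mechanism (only a sketch; the GUC and the queries here are
arbitrary examples, not taken from the regression suite), with proxying_gucs
switched on a client that sends

    SET datestyle = 'Postgres, MDY';
    SELECT now();

is effectively executed on the pooled backend as a single query string

    SET LOCAL datestyle = 'Postgres, MDY'; SELECT now();

which runs in an implicit transaction block. The same prepending applied to a
statement such as VACUUM or CREATE INDEX CONCURRENTLY fails, because those
statements cannot run inside a transaction block, and that is exactly the
case the check above tries to detect.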

"make installcheck" can be passed if you add the folowing three settings
to configuration file:

datestyle='Postgres, MDY'
timezone='PST8PDT'
intervalstyle='postgres_verbose'

Sorry, I was not able to reproduce the crash.
If you can find a scenario that reproduces it, I will be very pleased
to receive it.

Attached to this mail is a new version of the patch. It includes
multitenancy support.
Previously a separate pool was created for each <dbname,role> pair.
A Postgres backend is not able to work with more than one database,
but it is possible to change the current user (role) within one connection.
If the "multitenant_proxy" option is switched on, then a separate pool is
created only per database, and the current user is explicitly
specified for each transaction/standalone
statement using a "SET ROLE" command.
To support this mode you need to grant all roles permission to
switch between each other (see the sketch below).
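
As a sketch only (the role names and the exact settings are illustrative, not
taken from the patch), a multitenant configuration could look like

    multitenant_proxy = on
    connection_proxies = 2
    session_pool_size = 10

and on the SQL level the application roles would be granted to each other, so
that a pool worker serving one role may issue SET ROLE for the other:

    GRANT alice TO bob;
    GRANT bob TO alice;

After that a transaction or standalone statement coming from alice is
effectively prefixed with something like "SET ROLE alice" before it is sent
to a worker backend that may previously have served bob.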

So basically multitenancy support uses the same mechanism as GUC proxying.
I will continue working on improving the GUC proxying mechanism, so that it
can pass the regression tests.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-20.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends can serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting session parameters in connection pooler sessions.
+          When this parameter is switched on, assignments of session parameters are replaced with assignments of local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..8dc9594
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and Odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer may have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) within one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    will be created only per database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is then dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be quite large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be swapped with the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed by the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all its pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level pooling is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially on latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
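+	/* Retry sendmsg() on EINTR; any other failure is reported to the caller */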
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
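+/*
+ * Windows has no socketpair(), so emulate it: create a listening socket bound
+ * to the loopback interface, connect to it and accept the connection, which
+ * yields a pair of connected stream sockets.
+ */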
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
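+			/*
+			 * One socket pair per proxy worker: the postmaster sends accepted
+			 * client sockets through socks[0] (see pg_send_sock in ServerLoop)
+			 * and the worker receives them on socks[1].
+			 */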
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
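+	/* SSL clients are weighted three times heavier: SSL processing consumes much more CPU */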
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
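+						/*
+						 * Connections accepted on the proxy port are handed over
+						 * to one of the proxy workers instead of spawning a
+						 * dedicated backend for them.
+						 */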
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
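+/*
+ * Parse the body of a startup packet. This part of ProcessStartupPacket is
+ * split out so that the connection proxy can parse startup packets received
+ * from pooled clients (see client_connect in proxy.c).
+ */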
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..07b866d
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1308 @@
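+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker which multiplexes pooled client sessions
+ *	  over a limited number of backends.
+ *
+ * src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */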
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command and is outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle a communication failure on this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into a single-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't sent 'X' command, so do it for him. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* interrupted flags makes channel_write to send 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because of using edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or operation is successfully completed, false in case of error
+ * or socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
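+/*
+ * Crude prefix check for statements that should not be prefixed with the
+ * session's "SET LOCAL ..." commands (most of them can not be executed
+ * inside a transaction block).
+ */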
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle and non-tainted backends: they can be reused */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& pg_strncasecmp(stmt, "set", 3) == 0
+							&& pg_strncasecmp(stmt+3, " local", 6) != 0)
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, stmt+3,
+												  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send SET command to check if it is correct.
+							 * To avoid "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be send later once backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the pool associated with a particular dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/*
+		 * We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand.
+		 */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
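+		/* Skip a message: 1-byte type followed by an int32 length that includes itself */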
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too much sessios: try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too much sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			else
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->peer == NULL || chan->peer->tx_size == 0) /* nothing to write */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->tx_size != 0) /* pending write: disable read events until it is completed */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because events of their peers may still be pending.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/*
+ * Return information about the state of the proxies.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events instead.
+ * But poll()/WaitForMultipleObjects() operate on an array of monitored descriptors,
+ * so elements of the pollfds and handles arrays must be stored without holes
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
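+ *
+ * For example (an illustrative walk-through of the code below, not additional
+ * behavior): with three events A, B and C registered, permutation holds
+ * {A.pos, B.pos, C.pos} and each event's "index" points back to its slot in
+ * permutation.  Deleting B copies the last descriptor slot over B's slot,
+ * sets permutation[B.index] = C.pos and events[C.pos].index = B.index, and
+ * then pushes B.pos onto the free_events list so that the next
+ * AddWaitEventToSet call can reuse it.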
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
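+
+/*
+ * Typical usage (sketch): the caller keeps the position returned by
+ * AddWaitEventToSet and later passes it to DeleteWaitEventFromSet, e.g.
+ *
+ *	chan->event_pos = AddWaitEventToSet(set, WL_SOCKET_READABLE, sock, NULL, chan);
+ *	...
+ *	DeleteWaitEventFromSet(set, chan->event_pos);
+ */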
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I ran into a problem on Windows where SSPI connections hung in WaitForMultipleObjects, which
+		 * did not signal the presence of input data (although the data could be read from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..06cbae3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and can actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to the client */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptor array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#49Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#48)
1 attachment(s)
Re: Built-in connection pooler

On 06.09.2019 19:41, Konstantin Knizhnik wrote:

On 06.09.2019 1:01, Jaime Casanova wrote:

Sadly i got a lot of FAILED tests, i'm attaching the diffs on
regression with installcheck and installcheck-parallel.
btw, after make installcheck-parallel i wanted to do a new test but
wasn't able to drop regression database because there is still a
subscription, so i tried to drop it and got a core file (i was
connected trough the pool_worker), i'm attaching the backtrace of the
crash too.

Sorry, I failed to reproduce the crash.
So if you are able to find a scenario that reproduces it, I
will be very pleased to receive it.

I was able to reproduce the crash.
A patch is attached. I also added proxying of the RESET command.
Unfortunately it is still not enough to pass the regression tests with
"proxying_gucs=on",
mostly because the error messages don't match after prepending "set local"
commands.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-21.patchtext/x-patch; name=builtin_connection_proxy-21.patchDownload
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main <varname>port</varname> are assigned dedicated backends,
+          while clients connected to the proxy port are served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends,
+          so the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy used to assign sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, statements setting session parameters are replaced with statements setting local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
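+        <para>
+          For example (an illustrative GUC), with <varname>proxying_gucs</varname> enabled a client's
+          <literal>SET search_path TO app</literal> is translated into
+          <literal>SET LOCAL search_path TO app</literal>, which is then concatenated with each
+          subsequent transaction or standalone statement, so the backend does not become tainted.
+        </para>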
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..8dc9594
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures, as well as the complexity of the
+    algorithms that operate on them, are proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and Odyssey. Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, in which case multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend can work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend cannot work with more than one database, but it is possible to change the current user (role) within one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only per database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch to each other.
+  </para>
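+
+  <para>
+    For example (illustrative role name), with <varname>multitenant_proxy</varname> switched on a transaction
+    issued by role <literal>alice</literal> is prepended with <literal>SET ROLE alice</literal> by the proxy,
+    so the same pool worker can execute the following transaction on behalf of a different role.
+  </para>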
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save and restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, taking advisory locks),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is then dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option makes it possible to set session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
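+  <para>
+    For illustration, any of the following statements executed in a pooled session would taint its backend
+    (the object names are arbitrary); the <literal>SET</literal> example applies only when
+    <varname>proxying_gucs</varname> is off:
+  </para>
+
+<programlisting>
+CREATE TEMP TABLE my_tmp(id int);
+PREPARE my_stmt AS SELECT 1;
+SELECT pg_advisory_lock(42);
+SET work_mem = '64MB';
+</programlisting>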
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres must be configured
+    to accept local connections (in the <literal>pg_hba.conf</literal> file).
+  </para>
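+  <para>
+    A sketch of a <literal>pg_hba.conf</literal> entry allowing such local connections might look like the
+    following (the authentication method is only an example and should match your installation):
+  </para>
+
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS       METHOD
+host    all       all   127.0.0.1/32  trust
+</programlisting>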
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if the client changes session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
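+  <para>
+    An illustrative <literal>postgresql.conf</literal> fragment enabling the pooler might look like this
+    (the values are examples, not recommendations):
+  </para>
+
+<programlisting>
+connection_proxies = 2
+session_pool_size = 4
+max_sessions = 1000
+proxy_port = 6543
+</programlisting>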
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set, so it can be set fairly large without any essential negative impact on system resource consumption.
+    The default value is 1000, so the maximal number of client connections accepted through the pooler is limited by <varname>connection_proxies</varname>*<varname>max_sessions</varname>.
+  </para>
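+  <para>
+    For example, with the default <varname>max_sessions</varname> of 1000 and <varname>connection_proxies</varname>
+    set to 2, up to 2*1000=2000 client connections can be handled through the pooler at once.
+  </para>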
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    In that case it is still necessary to have a port for direct connections to the database (dedicated backends),
+    because the connection pooler itself needs it to launch worker backends.
+  </para>
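+  <para>
+    For example, a client could connect through the pooler simply by pointing at the proxy port
+    (the database and user names here are illustrative):
+  </para>
+
+<programlisting>
+psql "host=localhost port=6543 dbname=postgres user=app_user"
+</programlisting>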
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which distributes sessions between proxies cyclically.
+    It should work well for a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
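+  <para>
+    For example, the state of the proxies can be inspected from any session with a simple query
+    (<varname>session_schedule</varname> itself is set in <literal>postgresql.conf</literal>, as in the
+    configuration fragment shown earlier):
+  </para>
+
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>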
+
+  <para>
+    Because pooled backends are not terminated on client exit, it is not
+    possible to drop a database to which they are connected.  This can be worked around without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function; after that it becomes possible to drop the database. Alternatively you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers that have not been used for the specified time. If a database is not accessed for a long time, all its pool workers are eventually terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler does not require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application if any pooling policy other than session-level pooling is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save and resume session context.
+    Although this is not especially difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of the external and internal networks, complexity of queries and size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
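+  <para>
+    For example, to abort transactions that have stayed idle for more than one minute, one might set
+    (the exact value is only an illustration):
+  </para>
+
+<programlisting>
+idle_in_transaction_session_timeout = '1min'
+</programlisting>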
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..3bf2f2f
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1359 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * The proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for next command outside transaction block (idle state).
+ * Now if backend is not tainted it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be referenced by other pending epoll events.
+ * So link all such channels in a singly-linked list for pending delete.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it for him. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * The data is located in the peer buffer. Because we use edge-triggered mode, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
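+/*
+ * Added note: the statements listed below are treated as non-transactional by the proxy,
+ * i.e. the saved "SET LOCAL" prefix is not prepended to them (most of them, such as
+ * VACUUM or CREATE DATABASE, cannot be executed inside a transaction block).
+ */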
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			bool handshake = false;
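+			/*
+			 * Informational note: a startup packet has no message-type byte, only a 4-byte
+			 * length that counts the whole packet. All subsequent protocol messages consist
+			 * of a 1-byte type followed by a 4-byte length that does not include the type
+			 * byte (hence the +1 below).
+			 */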
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Do not forward terminate message to idle or non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to the backend to check that it is correct.
+							 * To avoid a "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a transaction block. Let's just double the command.
+							 */
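+							/*
+							 * Illustrative example (added for clarity): a client query
+							 *   SET search_path=app;
+							 * is rewritten here to roughly
+							 *   set local search_path=app; set local search_path=app;
+							 * The doubled multi-statement query runs in an implicit transaction
+							 * block, so the backend validates the setting without complaining
+							 * about SET LOCAL being used outside a transaction block.
+							 */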
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
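+								/* e.g. (illustrative): "begin;" becomes "begin;set local search_path=app;" */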
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
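+								/* e.g. (illustrative): "select 1;" becomes "set local search_path=app;select 1;" */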
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We do not expect messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for the particular pool associated with a dbname/role combination.
+ * The backend is forked by the postmaster (BackendStartup) in response to a libpq connection opened by the proxy.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need libpq library to be able to establish connections to pool workers.
+		* This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+	conn = LibpqConnectdbParams(keywords, values, error);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
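+	/*
+	 * Informational note: the handshake response is a sequence of protocol messages,
+	 * each consisting of a 1-byte type and a 4-byte length (which does not count the
+	 * type byte). BackendKeyData ('K') carries the backend PID followed by the cancel key.
+	 */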
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, the error was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, the error was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return event for already closed session if
+			 * socket is still open. From epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking that the magic field is valid we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: stop reading until the data is sent */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* interrupted flags makes channel_write to send 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still be pending.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from another auxiliary workers.
+ * In future it may be replaced with background worker.
+ * The main problem with background worker is how to pass socket to it and obtains its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the state of connection proxies.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients  - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from a wait event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array slot), which is implemented using the "index" field of WaitEvent.
+ */
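+
+/*
+ * Illustrative example (added for clarity): suppose events are registered at positions
+ * 0, 1 and 2 and the event at position 1 is then deleted. The last descriptor slot is
+ * moved into slot 1, so permutation[] becomes {0, 2}, the "index" field of the event at
+ * position 2 is updated to 1, and position 1 is pushed onto the free_events list for reuse.
+ */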
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* single-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I had a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..06cbae3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
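+	/*
+	 * Illustrative note (added for clarity): with session_pool_size = 4 and
+	 * connection_proxies = 2, clients spread over 3 databases and 2 roles can
+	 * cause up to 4 * 2 * 3 * 2 = 48 pooled (non-tainted) backends to be launched.
+	 */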
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptor array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#50Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#49)
1 attachment(s)
Re: Built-in connection pooler

On 09.09.2019 18:12, Konstantin Knizhnik wrote:

On 06.09.2019 19:41, Konstantin Knizhnik wrote:

On 06.09.2019 1:01, Jaime Casanova wrote:

Sadly I got a lot of FAILED tests; I'm attaching the diffs on
regression with installcheck and installcheck-parallel.
Btw, after make installcheck-parallel I wanted to do a new test but
wasn't able to drop the regression database because there is still a
subscription, so I tried to drop it and got a core file (I was
connected through the pool_worker); I'm attaching the backtrace of the
crash too.

Sorry, I failed to reproduce the crash.
So if you are able to find a scenario that reproduces it, I will be
very pleased to receive it.

I was able to reproduce the crash.
A patch is attached. I have also added proxying of the RESET command.
Unfortunately it is still not enough to pass the regression tests with
"proxying_gucs=on", mostly because error messages don't match after
the "set local" commands are prepended.

I have implemented passing of startup options to the pooler backend.
Now "make installcheck" passes without manually setting
datestyle/timezone/intervalstyle in postgresql.conf.
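
For example, a quick way to check from libpq that a startup option really reaches the pooled backend through the proxy port (just a sketch; it assumes the default proxy_port 6543 and a database named "regression"):

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	PGconn	   *conn;
	PGresult   *res;

	/* connect through the proxy port, passing a GUC as a startup option */
	conn = PQconnectdb("host=localhost port=6543 dbname=regression "
					   "options='-c datestyle=ISO,MDY'");
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* the pooled backend should report the value passed at startup */
	res = PQexec(conn, "SHOW datestyle");
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("datestyle = %s\n", PQgetvalue(res, 0, 0));

	PQclear(res);
	PQfinish(conn);
	return 0;
}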

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-22.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" are assigned dedicated backends,
+          while clients connected to the proxy port are connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting of session parameters is replaced with setting of local (transaction) parameters,
+          which are prepended to each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..8dc9594
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms working on these data structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE pools, or Odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    will be created only for each database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option makes it possible to set session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large enough without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  It can be achieved without server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of additional components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session-level pooling is used.
+    And if the application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly it is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works..
+	 * works.)
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
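
For readers unfamiliar with descriptor passing, here is a standalone sketch of the same SCM_RIGHTS technique used by pg_send_sock()/pg_recv_sock() above (plain POSIX, not part of the patch): a parent opens a file and hands the descriptor to a forked child over a socketpair, much as the postmaster hands accepted client sockets to proxy workers.

/* Standalone SCM_RIGHTS illustration; POSIX only, not part of the patch. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

static int
send_fd(int chan, int fd)
{
	struct msghdr msg = {0};
	struct iovec io;
	struct cmsghdr *cmsg;
	char		buf[CMSG_SPACE(sizeof(fd))];

	memset(buf, 0, sizeof(buf));
	io.iov_base = "x";			/* at least one byte of real payload */
	io.iov_len = 1;
	msg.msg_iov = &io;
	msg.msg_iovlen = 1;
	msg.msg_control = buf;
	msg.msg_controllen = sizeof(buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(fd));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));

	return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

static int
recv_fd(int chan)
{
	struct msghdr msg = {0};
	struct iovec io;
	struct cmsghdr *cmsg;
	char		payload[1];
	char		buf[CMSG_SPACE(sizeof(int))];
	int			fd;

	io.iov_base = payload;
	io.iov_len = sizeof(payload);
	msg.msg_iov = &io;
	msg.msg_iovlen = 1;
	msg.msg_control = buf;
	msg.msg_controllen = sizeof(buf);

	if (recvmsg(chan, &msg, 0) < 0)
		return -1;
	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg == NULL)
		return -1;
	memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
	return fd;
}

int
main(void)
{
	int			socks[2];

	socketpair(AF_UNIX, SOCK_STREAM, 0, socks);

	if (fork() == 0)
	{
		/* child: receive the descriptor and write through it */
		int			fd = recv_fd(socks[1]);
		const char	line[] = "written by child via passed descriptor\n";

		write(fd, line, sizeof(line) - 1);
		_exit(0);
	}

	/* parent: open a file and pass its descriptor to the child */
	int			fd = open("fd_passing_demo.txt", O_CREAT | O_TRUNC | O_WRONLY, 0600);

	send_fd(socks[0], fd);
	wait(NULL);
	return 0;
}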
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
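
To illustrate the calling convention of this hook, here is a hypothetical caller (the actual caller lives in proxy.c and is not shown here): the keyword/value arrays follow the usual PQconnectdbParams() convention of parallel, NULL-terminated lists, and the error string is allocated with strdup() above, so the caller is responsible for freeing it.

/* Illustration only: using the LibpqConnectdbParams hook from backend code. */
#include "postgres.h"
#include "postmaster/postmaster.h"

static void *
open_local_connection(char const *dbname, char const *user, char const *port)
{
	char const *keywords[] = {"host", "port", "dbname", "user", NULL};
	char const *values[]   = {"localhost", port, dbname, user, NULL};
	char	   *errmsg = NULL;
	void	   *conn;

	if (LibpqConnectdbParams == NULL)	/* libpqconn library not loaded */
		return NULL;

	conn = LibpqConnectdbParams(keywords, values, &errmsg);
	if (conn == NULL)
	{
		elog(LOG, "proxy could not connect: %s",
			 errmsg ? errmsg : "unknown error");
		if (errmsg)
			free(errmsg);		/* allocated with strdup() in libpqconn.c */
	}
	return conn;
}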
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..eb8dcad
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1482 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * The proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Wait event set for backend and client sockets */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and log a message
+		 * if GUCs in startup packets differ.
+		 */
+		if (chan->pool->n_launched_backends == 0)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So such channels are linked into a singly-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle or non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking for a valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: suspend reads until it completes */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because pending peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of backends in idle state
+ * n_idle_clients - number of clients in idle state
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
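
For illustration only (this is not part of the patch): since each row returned by
pg_pooler_state() describes one proxy worker, the per-proxy counters can be summed
to get totals for the whole instance. A minimal monitoring query, assuming only the
columns documented in the comment above:

    SELECT sum(n_clients)                 AS clients,
           sum(n_backends)                AS backends,
           sum(n_transactions)            AS transactions,
           pg_size_pretty(sum(tx_bytes))  AS sent_to_clients,
           pg_size_pretty(sum(rx_bytes))  AS received_from_clients
    FROM pg_pooler_state();
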
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events. So we have to maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored events.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array) which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Singly-linked list of free events, linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I had a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal presence of input data (while it is possible to read this data from the socket).
+		 * Looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..06cbae3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends. So the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
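A minimal standalone sketch of the permutation/free-list bookkeeping that the latch.c changes above introduce (simplified types, illustrative only, not the actual Postgres API): every live event keeps a stable position ("pos") handed out to callers and a slot ("index") into the dense descriptor array; removed positions are chained into a free list, and deletion fills the dense hole with the last slot so that the pollfds/handles arrays stay contiguous.

/* sketch.c - simplified model of the wait-event-set bookkeeping, not Postgres code */
#include <assert.h>

#define MAX_EVENTS 16

typedef struct
{
	int	pos;	/* stable position handed to callers; free-list link while unused */
	int	index;	/* slot in the dense descriptor array, -1 while removed */
} Event;

typedef struct
{
	Event	events[MAX_EVENTS];			/* indexed by stable position */
	int		permutation[MAX_EVENTS];	/* dense slot -> stable position */
	int		nevents;					/* number of live events (dense slots in use) */
	int		free_events;				/* head of the free-position list, -1 if empty */
} EventSet;

static void
set_init(EventSet *set)
{
	set->nevents = 0;
	set->free_events = -1;
}

/* returns the stable position of the new event, or -1 if the set is full */
static int
set_add(EventSet *set)
{
	Event  *event;
	int		free_event = set->free_events;

	if (set->nevents == MAX_EVENTS)
		return -1;

	if (free_event >= 0)
	{
		/* reuse a previously removed position: pop it from the free list */
		event = &set->events[free_event];
		set->free_events = event->pos;
		event->pos = free_event;
	}
	else
	{
		/* free list is empty, so position nevents has never been handed out */
		event = &set->events[set->nevents];
		event->pos = set->nevents;
	}
	set->permutation[set->nevents] = event->pos;
	event->index = set->nevents++;
	return event->pos;
}

static void
set_delete(EventSet *set, int event_pos)
{
	Event  *event = &set->events[event_pos];

	assert(event->index >= 0 && event->index < set->nevents);

	/* fill the dense hole with the last dense slot (self-assignment if it is the same slot) */
	if (--set->nevents != 0)
	{
		set->permutation[event->index] = set->permutation[set->nevents];
		set->events[set->permutation[set->nevents]].index = event->index;
	}
	/* push the freed position onto the free list */
	event->index = -1;
	event->pos = set->free_events;
	set->free_events = event_pos;
}

int
main(void)
{
	EventSet	set;
	int			a, b, c;

	set_init(&set);
	a = set_add(&set);		/* position 0 */
	b = set_add(&set);		/* position 1 */
	set_delete(&set, a);	/* position 0 goes to the free list, position 1 moves into dense slot 0 */
	c = set_add(&set);		/* reuses position 0 */
	assert(b == 1 && c == a && set.nevents == 2);
	return 0;
}

The point of this layout is that deleting an event costs O(1) and previously returned positions stay valid, which is what lets a proxy worker add and drop many client sockets in one wait event set without rebuilding it.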
#51Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Konstantin Knizhnik (#50)
Re: Built-in connection pooler

Travis complains that the SGML docs are broken. Please fix.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#52Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Alvaro Herrera (#51)
1 attachment(s)
Re: Built-in connection pooler

On 25.09.2019 23:14, Alvaro Herrera wrote:

Travis complains that the SGML docs are broken. Please fix.

Sorry.
Patch with fixed SGML formatting error is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-23.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures are proportional to the number of
+    active backends as well as complexity of algorithms for the data structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    will be created only for each database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> clause.
+    To support this mode you need to grant permissions to all roles to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client is connected to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still comes through the proxy.
+  </para>
+
+  <para>
+    Postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case postmaster will choose the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a too large value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    Default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    Connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, in which case all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    Postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  It can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session level is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly it is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("could not send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("could not receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "could not transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
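
To make the intended call pattern clearer, here is a rough usage sketch (not part of the patch); client_sock and proxy_pid stand for the accepted client socket and the pid of the target proxy worker:

	pgsocket ctl[2];

	/* created once per proxy worker at postmaster startup */
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, ctl) < 0)
		elog(FATAL, "could not create socket pair: %m");

	/* postmaster side: hand an accepted client socket over to the proxy */
	if (pg_send_sock(ctl[0], client_sock, proxy_pid) < 0)
		elog(LOG, "could not pass client socket to proxy: %m");

	/* proxy worker side: pick the client socket up on its end of the pair */
	client_sock = pg_recv_sock(ctl[1]);
	if (client_sock == PGINVALID_SOCKET)
		elog(LOG, "could not receive client socket from postmaster: %m");

On Windows the same flow goes through WSADuplicateSocket/WSASocket as implemented above.
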
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
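
A rough sketch of how the proxy is expected to call this hook (it mirrors backend_start() in proxy.c further down; the literal parameter values are just placeholders):

	char const *keywords[] = {"port", "dbname", "user", "application_name", NULL};
	char const *values[]   = {"5432", "postgres", "postgres", "pool_worker", NULL};
	char	   *error = NULL;
	void	   *conn;

	conn = LibpqConnectdbParams(keywords, values, &error);
	if (conn == NULL)
	{
		elog(LOG, "could not start pool worker backend: %s",
			 error ? error : "unknown error");
		free(error);
	}

The hook is kept behind a function pointer so that the server itself is not linked against libpq; the shared library is loaded with load_file() on first use.
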
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
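
From the client side nothing changes except the port number: any libpq application can simply be pointed at the proxy port. A trivial example (not part of the patch; 6543 stands for whatever proxy port is configured):

	#include <stdio.h>
	#include "libpq-fe.h"

	int
	main(void)
	{
		PGconn *conn = PQconnectdb("host=localhost port=6543 dbname=postgres");

		if (PQstatus(conn) != CONNECTION_OK)
			fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 0;
	}

Whether the session is multiplexed or pinned to a dedicated (tainted) backend is decided entirely on the server side.
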
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..b8723d8
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1485 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * The proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pool worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be involved in other pending epoll events.
+ * So link all such channels into a single-linked list for pending deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or operation is successfully completed, false in case of error
+ * or socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			bool handshake = false;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				int response_size = msg_start + msg_len;
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				if (chan->peer == NULL)	 /* client is not yet connected to backend */
+				{
+					if (!chan->client_port)
+					{
+						/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+						channel_hangout(chan, "idle");
+						return false;
+					}
+					client_attach(chan);
+					if (handshake) /* Send handshake response to the client */
+					{
+						/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+						Channel* backend = chan->peer;
+						Assert(chan->rx_pos == msg_len && msg_start == 0);
+						chan->rx_pos = 0; /* Skip startup packet */
+						if (backend != NULL) /* Backend was assigned */
+						{
+							Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+							Assert(backend->handshake_response_size < backend->buf_size);
+							memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+							backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+							backend->backend_is_ready = true;
+							return channel_write(chan, false);
+						}
+						else
+						{
+							/* Handshake response will be sent to the client later when a backend is assigned */
+							return false;
+						}
+					}
+					else if (chan->peer == NULL) /* Backend was not assigned */
+					{
+						chan->tx_size = response_size; /* query will be sent later once a backend is assigned */
+						return false;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool worker backends.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
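+	/* BackendKeyData ('K') message layout: 1-byte type, 4-byte length, 4-byte PID, 4-byte cancel key, so the PID starts at offset 5 */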
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions; the error was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions; the error was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking for a valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: read is not prohibited */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' (terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because events for their peers may still be pending.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
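
As a quick illustration (not part of the patch itself), the combined workload of all
proxies can be inspected with a query over the columns documented above, for example:

    select sum(n_clients)      as clients,
           sum(n_backends)     as backends,
           sum(n_transactions) as transactions
    from pg_pooler_state();
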
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since the positions of all other events have to be preserved,
+ * we cannot move events, so we maintain a list of free events instead.
+ * But poll/WaitForMultipleObjects operates on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need a backward mapping
+ * (from event to the descriptors array), which is implemented using the "index" field of WaitEvent.
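+ * The maintained invariant is: events[permutation[i]].index == i for every i in [0, nevents).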
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Head of a singly linked list of free events, linked via "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
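+	/* Reuse a slot from the free list when one is available; a free slot's "pos" field stores the index of the next free slot */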
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
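+		/* The event occupying the last dense slot takes over the removed event's position in the descriptor arrays */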
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
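+	/* Push the freed slot onto the free list, reusing "pos" as the next-free link */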
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..06cbae3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, used temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#53Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#52)
1 attachment(s)
Re: Built-in connection pooler

New version of the built-in connection pooler, fixing the handling of extended-protocol messages.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-24.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates a temporary table, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" are assigned dedicated backends,
+          while clients connected to the proxy port are connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With <literal>random</literal> policy postmaster randomly choose proxy for new session.
+        </para>
+        <para>
+          With <literal>load-balancing</literal> policy postmaster choose proxy with lowest load average.
+          Load average of proxy is estimated by number of clients connection assigned to this proxy with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Supports setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction-level) parameters,
+          which are sent together with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Allows one pool worker to serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a <literal>set role</literal> command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, are proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE pools, or odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, in which case multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler can reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend can work with only a single database, each proxy process maintains a
+    hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) within one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only for each database, and the current user is explicitly specified for each transaction or standalone statement using a <literal>set role</literal> clause.
+    To support this mode, all roles must be granted permission to switch to each other, as shown in the example below.
+  </para>
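+
+  <para>
+    For example, assuming two application roles <literal>alice</literal> and <literal>bob</literal>
+    (the role names here are purely illustrative), the required permissions could be granted as follows:
+<programlisting>
+GRANT alice TO bob;
+GRANT bob TO alice;
+</programlisting>
+  </para>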
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (by changing values of session variables (GUCs), creating temporary tables, preparing statements, or taking advisory locks),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is then dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
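+
+  <para>
+    For example, any of the following statements (shown purely as an illustration) makes the
+    executing backend <emphasis>tainted</emphasis> and therefore dedicated to the current session:
+<programlisting>
+SET work_mem = '64MB';
+CREATE TEMPORARY TABLE my_temp_table(id integer);
+PREPARE my_stmt(integer) AS SELECT $1;
+</programlisting>
+  </para>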
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (in the <literal>pg_hba.conf</literal> file).
+  </para>
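+
+  <para>
+    For example, with the default settings a client can be connected either through the pooler or
+    directly (the host and database name below are only illustrative):
+<programlisting>
+psql -h localhost -p 6543 mydb    # pooled connection through the proxy
+psql -h localhost -p 5432 mydb    # direct connection to a dedicated backend
+</programlisting>
+  </para>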
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It affects only the size of the wait event set and so can be set large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled.
+    Even then it is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed by the connection pooler itself to launch worker backends.
+  </para>
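+
+  <para>
+    For example, the following <filename>postgresql.conf</filename> fragment (the values are only
+    illustrative) enables the built-in connection pooler with two proxy processes:
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+max_sessions = 1000
+proxy_port = 6543
+</programlisting>
+  </para>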
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well for a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
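+
+  <para>
+    For example, the proxy state can be inspected from <application>psql</application>:
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>
+  </para>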
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable: setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function, after which the database can be dropped. Alternatively, you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers that have not been used for the specified time. If a database is not accessed for a long time, all its pool workers are terminated.
+  </para>
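+
+  <para>
+    For example, assuming <varname>restart_pooler_on_reload</varname> has been set to
+    <literal>true</literal> in <filename>postgresql.conf</filename>, a database that is still served
+    by pooled backends could be dropped as follows (the database name is only illustrative):
+<programlisting>
+SELECT pg_reload_conf();
+DROP DATABASE mydb;
+</programlisting>
+  </para>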
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level pooling is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients use prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. This greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, the backend can stay in <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
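+
+  <para>
+    For example, the following setting (the timeout value is only illustrative) aborts any
+    transaction that stays idle for more than 30 seconds:
+<programlisting>
+idle_in_transaction_session_timeout = '30s'
+</programlisting>
+  </para>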
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make regression tests pass, the backend is also marked as tainted when it creates a
+	 * sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..6d9dfdc
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1506 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * Now, if the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || !chan->backend_proc->is_tainted) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		Assert(!chan->backend_is_tainted);
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if the GUCs in the startup packets differ.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
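+	/* Message length counts itself: 4 (length) + 1 (field code) + strlen + 1 ('\0') + 1 (terminator) = 7 + strlen */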
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into an L1 list for pending deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send an 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of an error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
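+/*
+ * Return true if the statement starts an explicit transaction block (BEGIN or START ...).
+ */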
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
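+/*
+ * Return false for statements that should not have the session GUC prefix prepended
+ * (mostly commands that cannot run inside a transaction block); true otherwise.
+ */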
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						chan->is_interrupted = true;
+						if (chan->peer == NULL || !chan->peer->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check if it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a transaction block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(LOG, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or a shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(LOG, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(LOG, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(LOG, "Query will be sent later when a backend is assigned to this client");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is forked by the postmaster using the BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend instead of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking for a valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: read is now prohibited */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
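+		/* Terminate pool workers that have been idle longer than idle_pool_worker_timeout */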
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still be pending.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy by the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
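+ * n_idle_backends - number of idle backends maintained by this proxy
+ * n_idle_clients  - number of idle client sessions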
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from a wait event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events. So we have to maintain a list of free events.
+ * But poll/WaitForMultipleObjects operates on an array of listened events.
+ * That is why the elements of the pollfds and handles arrays should be stored without holes,
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need a backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* L1-list of free events linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I have a problem on Windows when SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..06cbae3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, RESOURCES_MEM,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#54ideriha.takeshi@fujitsu.com
ideriha.takeshi@fujitsu.com
In reply to: Konstantin Knizhnik (#53)
RE: Built-in connection pooler

Hi.

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]

New version of builtin connection pooler fixing handling messages of extended
protocol.

Here are things I've noticed.

1. Is adding guc to postgresql.conf.sample useful for users?

2. When proxy_port is a bit large (perhaps more than 2^15), the connection fails,
though the regular "port" is fine with numbers above 2^15.

$ bin/psql -p 32768
2019-11-12 16:11:25.460 JST [5617] LOG: Message size 84
2019-11-12 16:11:25.461 JST [5617] WARNING: could not setup local connect to server
2019-11-12 16:11:25.461 JST [5617] DETAIL: invalid port number: "-22768"
2019-11-12 16:11:25.461 JST [5617] LOG: Handshake response will be sent to the client later when backed is assigned
psql: error: could not connect to server: invalid port number: "-22768"

3. When proxy_port is 6543 and connection_proxies is 2, running "make installcheck" twice without restarting the server fails.
This is because of a remaining backend.

============== dropping database "regression" ==============
ERROR: database "regression" is being accessed by other users
DETAIL: There is 1 other session using the database.
command failed: "/usr/local/pgsql-connection-proxy-performance/bin/psql" -X -c "DROP DATABASE IF EXISTS \"regression\"" "postgres"

4. When running "make installcheck-world" with various connection_proxies values, it results in a different number of errors.
With connection_proxies = 2, the test never ends. With connection_proxies = 20, 23 tests failed.
The more connection_proxies, the fewer tests failed.

Regards,
Takeshi Ideriha

#55Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: ideriha.takeshi@fujitsu.com (#54)
1 attachment(s)
Re: Built-in connection pooler

Hi

On 12.11.2019 10:50, ideriha.takeshi@fujitsu.com wrote:

Hi.

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]

New version of builtin connection pooler fixing handling messages of extended
protocol.

Here are things I've noticed.

1. Is adding guc to postgresql.conf.sample useful for users?

Good catch: I will add it.

2. When proxy_port is a bit large (perhaps more than 2^15), the connection fails,
though the regular "port" is fine with numbers above 2^15.

$ bin/psql -p 32768
2019-11-12 16:11:25.460 JST [5617] LOG: Message size 84
2019-11-12 16:11:25.461 JST [5617] WARNING: could not setup local connect to server
2019-11-12 16:11:25.461 JST [5617] DETAIL: invalid port number: "-22768"
2019-11-12 16:11:25.461 JST [5617] LOG: Handshake response will be sent to the client later when backed is assigned
psql: error: could not connect to server: invalid port number: "-22768"

Hmmm, ProxyPortNumber is used exactly in the same way as PostPortNumber.
I was able to connect to the specified port:

knizhnik@knizhnik:~/dtm-data$ psql postgres -p 42768
psql (13devel)
Type "help" for help.

postgres=# \q
knizhnik@knizhnik:~/dtm-data$ psql postgres -h 127.0.0.1 -p 42768
psql (13devel)
Type "help" for help.

postgres=# \q

3. When proxy_port is 6543 and connection_proxies is 2, running "make installcheck" twice without restarting the server fails.
This is because of a remaining backend.

============== dropping database "regression" ==============
ERROR: database "regression" is being accessed by other users
DETAIL: There is 1 other session using the database.
command failed: "/usr/local/pgsql-connection-proxy-performance/bin/psql" -X -c "DROP DATABASE IF EXISTS \"regression\"" "postgres"

Yes, this is a known limitation.
Frankly speaking, I do not consider it a problem: it is not possible
to drop a database while there are active sessions accessing it,
and the proxy definitely has such sessions. You can specify
idle_pool_worker_timeout to shut down pooler workers after some idle time.
In this case, if you leave a large enough pause between test iterations,
the workers will be terminated and it will be possible to drop the database.
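
For example, a minimal sketch of such a configuration (the timeout value is only illustrative):

connection_proxies = 2
session_pool_size = 10
idle_pool_worker_timeout = 10000   # pool workers idle for more than 10 seconds are terminated

Alternatively, with the patch's restart_pooler_on_reload = true, executing pg_reload_conf() before the DROP DATABASE shuts down the pooled backends explicitly.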

4. When running "make installcheck-world" with various connection_proxies values, it results in a different number of errors.
With connection_proxies = 2, the test never ends. With connection_proxies = 20, 23 tests failed.
The more connection_proxies, the fewer tests failed.

Please notice that each proxy maintains its own connection pool.
The default number of pooled backends per pool is 10 (session_pool_size).
If you specify too large a number of proxies, then the number of spawned backends,
session_pool_size * connection_proxies, can become too large for the
specified max_connections.

Please notice the difference between the number of proxies and the number of
pooler backends.
Usually one proxy process is enough to serve all workers. Only on
MPP systems with a large number of cores,
and especially with SSL connections, can the proxy become a bottleneck. In
that case you can configure several proxies.
But having more than 1-4 proxies seems to be a bad idea.
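
To make the arithmetic concrete (a sketch only, combining the default session_pool_size with the connection_proxies values tried above):

connection_proxies = 20
session_pool_size = 10     # up to 20 * 10 = 200 pooled backends per database/role pair
max_connections = 100      # default value, already exceeded by the pool above

With connection_proxies = 1 or 2 the same pool size needs at most 10-20 backends, which fits comfortably under the default max_connections.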

But in the case of check-world the problem is not related to the number of
proxies:
it takes place even with connection_proxies = 1.
There was one bug in handling clients terminated inside a transaction.
It is fixed in the attached patch.
But there is still a problem with passing the isolation tests under the connection
proxy: they are using the pg_isolation_test_session_is_blocked
function, which checks whether backends with the specified PIDs are blocked. But
since with the connection proxy a session is no longer bound
to a particular backend, this check may not work as expected and the test
gets blocked. I do not know how it can be fixed, and I am not sure whether it has to
be fixed at all.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-25.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..df0bcaf 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -719,6 +719,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          Postmaster spawns a separate worker process for each proxy. Postmaster scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies scheduling policy for assigning session to proxies in case of
+          connection pooling. Default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to this proxy, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. It makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large number of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations are using some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12 <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    will be created only per database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If the client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be large enough without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases are pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends):
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If the database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of client applications when a pooling policy other than session level is used.
+    And if the application is not changing session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), pgbench benchmark in select-only mode shows almost two times worse performance for local connections through connection pooler compared with direct local connections. For much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ee6e2bd 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..7d60c9b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0960b33..ac51dc4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly it is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fb2be10..b0af84b 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -591,6 +591,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 384887e..ebff20a 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index f4120be..e0cdd9e 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..9622ee7 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,6 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3339804..739b8fd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1008,6 +1072,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1031,32 +1100,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1125,29 +1198,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1157,6 +1233,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1374,6 +1464,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1611,6 +1703,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose connection pool for this session.
+ * Right now sessions cannot be moved between pools (in principle it would not be difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1701,8 +1844,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1899,8 +2052,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1967,6 +2118,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2067,7 +2230,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2739,6 +2902,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2816,6 +2981,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4041,6 +4209,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4050,8 +4219,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4155,6 +4324,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4851,6 +5022,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -4991,6 +5163,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5526,6 +5711,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6116,6 +6369,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6347,6 +6604,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..618a891
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be referenced by other epoll events.
+ * So link all such channels into a single-linked list for pending deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because we use edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends abort inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check if it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a transaction block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume it is some error or a shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query from this client will be sent later when a backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_itoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too much sessios: try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too much sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking that the magic field is still valid, we try to
+			 * ignore such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: disable reading until it is flushed */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' (terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index d7d7335..6d32d8f 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(int port)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(int port)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operates on an array of listened events.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptors array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * On Windows I have seen SSPI connections "hang" in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 1b7053c..b7c1ed7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 498373f..3e530e7 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -397,6 +397,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..e07f540 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4237,6 +4237,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..7e91742 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -457,6 +457,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1286,6 +1294,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2138,6 +2176,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2185,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4550,6 +4645,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8146,6 +8251,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index cfad86c..73f0902 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -744,6 +744,19 @@
 #include_if_exists = ''			# include file only if it exists
 #include = ''				# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to non-zero value enables builtin connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..812c469 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10704,4 +10704,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 61a24c2..8a31f4e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index b5c03d9..3ea24a3 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index f4841fb..fbc31d6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -445,6 +445,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -455,6 +456,7 @@ int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptf
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ac7ee72..e7207e2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcf2bc2..7f2a1df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index f274d80..fdf53e9 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -19,6 +19,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 3dea11e..39bd2de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -17,6 +17,7 @@ CFLAGS_SL =
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index d1d0aed..a677577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -158,6 +158,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -271,6 +272,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#56ideriha.takeshi@fujitsu.com
ideriha.takeshi@fujitsu.com
In reply to: Konstantin Knizhnik (#55)
RE: Built-in connection pooler

Hi

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]

New version of builtin connection pooler fixing handling messages of
extended protocol.

2. When proxy_port is a bit large (perhaps more than 2^15), connection
failed though regular "port" is fine with number more than 2^15.

$ bin/psql -p 32768
2019-11-12 16:11:25.460 JST [5617] LOG: Message size 84
2019-11-12 16:11:25.461 JST [5617] WARNING: could not setup local
connect to server
2019-11-12 16:11:25.461 JST [5617] DETAIL: invalid port number: "-22768"
2019-11-12 16:11:25.461 JST [5617] LOG: Handshake response will be
sent to the client later when backed is assigned
psql: error: could not connect to server: invalid port number: "-22768"

Hmmm, ProxyPortNumber is used exactly in the same way as PostPortNumber.
I was able to connect to the specified port:

knizhnik@knizhnik:~/dtm-data$ psql postgres -p 42768 psql (13devel) Type "help" for
help.

postgres=# \q
knizhnik@knizhnik:~/dtm-data$ psql postgres -h 127.0.0.1 -p 42768 psql (13devel)
Type "help" for help.

postgres=# \q

For now I reply to the above. Oh sorry, I was wrong about the condition.
The error occurred under the following conditions.
- port = 32768
- proxy_port = 6543
- $ psql postgres -p 6543

$ bin/pg_ctl start -D data
waiting for server to start....
LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
LOG: listening on IPv6 address "::1", port 6543
LOG: listening on IPv4 address "127.0.0.1", port 6543
LOG: listening on IPv6 address "::1", port 32768
LOG: listening on IPv4 address "127.0.0.1", port 32768
LOG: listening on Unix socket "/tmp/.s.PGSQL.6543"
LOG: listening on Unix socket "/tmp/.s.PGSQL.32768"
LOG: Start proxy process 25374
LOG: Start proxy process 25375
LOG: database system was shut down at 2019-11-12 16:49:20 JST
LOG: database system is ready to accept connections

server started
[postgres@vm-7kfq-coreban connection-pooling]$ psql -p 6543
LOG: Message size 84
WARNING: could not setup local connect to server
DETAIL: invalid port number: "-32768"
LOG: Handshake response will be sent to the client later when backed is assigned
psql: error: could not connect to server: invalid port number: "-32768"

By the way, the patch has some small conflicts against master.

Regards,
Takeshi Ideriha

#57Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: ideriha.takeshi@fujitsu.com (#56)
1 attachment(s)
Re: Built-in connection pooler

For now I replay for the above. Oh sorry, I was wrong about the condition.
The error occurred under following condition.
- port = 32768
- proxy_port = 6543
- $ psql postgres -p 6543

$ bin/pg_ctl start -D data
waiting for server to start....
LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
LOG: listening on IPv6 address "::1", port 6543
LOG: listening on IPv4 address "127.0.0.1", port 6543
LOG: listening on IPv6 address "::1", port 32768
LOG: listening on IPv4 address "127.0.0.1", port 32768
LOG: listening on Unix socket "/tmp/.s.PGSQL.6543"
LOG: listening on Unix socket "/tmp/.s.PGSQL.32768"
LOG: Start proxy process 25374
LOG: Start proxy process 25375
LOG: database system was shut down at 2019-11-12 16:49:20 JST
LOG: database system is ready to accept connections

server started
[postgres@vm-7kfq-coreban connection-pooling]$ psql -p 6543
LOG: Message size 84
WARNING: could not setup local connect to server
DETAIL: invalid port number: "-32768"
LOG: Handshake response will be sent to the client later when backed is assigned
psql: error: could not connect to server: invalid port number: "-32768"

By the way, the patch has some small conflicts against master.

Thank you very much for reporting the problem.
It was caused by using pg_itoa for the string representation of the port
(I did not expect that, unlike the standard itoa, it accepts an int16
parameter instead of an int).
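
To illustrate the failure mode (a minimal standalone sketch, not part of
the patch): a port value above 2^15-1 does not fit in int16, so on typical
two's-complement platforms it wraps to a negative number, which is exactly
the "-32768" seen in the error message.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int     port = 32768;               /* value of the "port" GUC */
    int16_t narrowed = (int16_t) port;  /* what passing it to an int16 helper did */

    /* prints "32768 -> -32768" on typical two's-complement platforms */
    printf("%d -> %d\n", port, (int) narrowed);
    return 0;
}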
Attached please find rebased patch with this bug fixed.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-26.patchtext/x-patch; name=builtin_connection_proxy-26.patchDownload
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index adf0490..5c2095f 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
 
@@ -93,6 +94,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -284,6 +287,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f837703..7433e6f 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -732,6 +732,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or uses prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster distributes sessions between proxies cyclically.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster chooses a proxy for each new session at random.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers when <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Supports setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting a session parameter is replaced with setting a local (transaction) parameter,
+          which is concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Allows one pool worker to serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a <command>SET ROLE</command> command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on these data structures.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, forcing multiple instances of pgbouncer to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools keyed by the <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
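+
+  <para>
+    For example, with <varname>connection_proxies</varname> = 2, <varname>session_pool_size</varname> = 10,
+    two databases and one role (illustrative numbers only), up to 2*10*2*1 = 40 non-dedicated backends may be launched.
+  </para>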
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    will be created only for each database, and the current user is explicitly specified for each transaction or standalone statement using a <command>SET ROLE</command> command.
+    To support this mode you need to grant all roles permission to switch to each other.
+  </para>
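+
+  <para>
+    For illustration, assuming two client roles <literal>alice</literal> and <literal>bob</literal>
+    (hypothetical names) that should be able to switch to each other, the required grants might look like:
+<programlisting>
+GRANT alice TO bob;
+GRANT bob TO alice;
+</programlisting>
+  </para>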
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option makes it possible to set session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
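+
+  <para>
+    For example, each of the following statements (an illustrative sketch; <literal>t</literal> and <literal>q</literal>
+    are hypothetical names) causes the serving backend to become <emphasis>tainted</emphasis>,
+    unless <varname>proxying_gucs</varname> is enabled for the first one:
+<programlisting>
+SET work_mem = '64MB';
+CREATE TEMP TABLE t(x integer);
+PREPARE q AS SELECT 1;
+</programlisting>
+  </para>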
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
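+
+  <para>
+    For example, a minimal <literal>pg_hba.conf</literal> entry allowing such local connections might look like
+    this sketch (the appropriate authentication method depends on the installation):
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS       METHOD
+host    all       all   127.0.0.1/32  trust
+</programlisting>
+  </para>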
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, in which case all connections to the databases are pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed by the connection pooler itself to launch worker backends.
+  </para>
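+
+  <para>
+    For illustration, a minimal <filename>postgresql.conf</filename> fragment enabling the pooler
+    might look like this (the values are just examples):
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+proxy_port = 6543
+</programlisting>
+  </para>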
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in the case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
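+
+  <para>
+    For example, the state of the proxies can be inspected with a simple query:
+<programlisting>
+SELECT * FROM pg_pooler_state();
+</programlisting>
+  </para>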
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes all pooled backends to shut down after the <function>pg_reload_conf()</function> function is executed. Then it will be possible to drop the database. Alternatively you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all pool workers are terminated.
+  </para>
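+
+  <para>
+    With <varname>restart_pooler_on_reload</varname> enabled, dropping a database might look like this
+    sketch (<literal>mydb</literal> is a hypothetical database name):
+<programlisting>
+SELECT pg_reload_conf();
+DROP DATABASE mydb;
+</programlisting>
+  </para>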
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy without noticing any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session-level is used.
+    And if an application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially on latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
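+
+  <para>
+    For example, such transactions can be aborted automatically after 30 seconds
+    (an illustrative value) with:
+<programlisting>
+SET idle_in_transaction_session_timeout = '30s';
+</programlisting>
+  </para>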
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index e59cba7..1e5aa4f 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 83f9959..cf7d1dd 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -57,6 +58,8 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 7e0a041..1fbfe6b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -457,6 +458,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a13322b..c5a1abe 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 45aae59..968f70f 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -590,6 +590,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index cd517e8..c0615fd 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -195,15 +195,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -220,6 +218,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -227,6 +230,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -329,7 +333,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, char *hostName, unsigned short portNumber,
 				 char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -593,6 +597,7 @@ StreamServerPort(int family, char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f..8c763c7 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..6ea4f35
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,165 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_LEN(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index d5b5e77..1564c8c 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 03e3d36..0726adb 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -23,6 +23,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 9ff2832..edfe12f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1123,6 +1187,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1146,32 +1215,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1240,29 +1313,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1272,6 +1348,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1397,6 +1487,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1634,6 +1726,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1724,8 +1867,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1922,8 +2075,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1990,6 +2141,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2090,7 +2253,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2781,6 +2944,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2858,6 +3023,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4108,6 +4276,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4117,8 +4286,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4222,6 +4391,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4918,6 +5089,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5058,6 +5230,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5601,6 +5786,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6205,6 +6458,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6436,6 +6693,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
+
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
 }
 
 
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..0368c71
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
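+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker for the built-in connection pooler:
+ *	  multiplexes client sessions over per-database/role pools of
+ *	  backends, rescheduling them at transaction boundaries.
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */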
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * Now, if the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and log a message
+		 * if the GUCs in a startup packet differ.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into a singly-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends abort inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a transaction block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query will be sent to the backend later when one is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase the 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach the BackendKeyData ('K') message, which carries the backend PID */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions, the error was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. The client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase the 'max_sessions' configuration parameter");
+		/* Too many sessions, the error was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return event for already closed session if
+			 * socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * Using this check for valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: stop listening for read events until it completes */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because of presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy", "", "", "");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose information about the state of the proxies.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to the proxy
+ * n_ssl_clients  - number of clients using the SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by the proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of backends not currently serving a client
+ * n_idle_clients  - number of client sessions not currently bound to a backend
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 4829953..bb6df49 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 2426cbc..287fb19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -72,11 +72,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we can not move events, so we maintain a list of free events instead.
+ * But poll()/WaitForMultipleObjects() operate on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked via "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -84,6 +102,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -137,9 +157,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -553,6 +573,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -571,20 +592,21 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -632,12 +654,11 @@ FreeWaitEventSet(WaitEventSet *set)
 #if defined(WAIT_USE_EPOLL)
 	close(set->epoll_fd);
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -650,7 +671,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -691,9 +712,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -720,8 +743,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -748,15 +783,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -767,10 +828,16 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -804,9 +871,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -844,6 +911,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -852,11 +921,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -864,11 +932,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -897,9 +970,21 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1200,11 +1285,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1227,15 +1313,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1326,17 +1410,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I had a problem on Windows where SSPI connections "hanged" in WaitForMultipleObjects, which
+		 * didn't signal the presence of input data (while it was possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1402,7 +1494,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1443,7 +1535,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 9089733..deba84d 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -774,7 +774,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index fff0628..b080193 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -396,6 +396,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index d7a72c0..34a4c75 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4245,6 +4245,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index bc62c6e..6f1bb75 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 3bf96de..6036703 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 4b3769b..e03d5a0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -459,6 +459,14 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -1291,6 +1299,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2143,6 +2181,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximum number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximum number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("The maximum number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and can be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2190,6 +2275,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4577,6 +4672,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8170,6 +8275,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index be02a76..ba3300d 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -753,6 +753,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to a non-zero value enables the builtin connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 58ea5b9..9e9c35d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10731,4 +10731,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 541f970..d739dc3 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 08a2576..1e12ee1 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, char *hostName,
-							 unsigned short portNumber, char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, char *hostName,
+							unsigned short portNumber, char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index bc6e03f..92f6f76 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 10dcb5f..fd33239 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index c459a24..2a541e6 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -436,6 +436,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -446,6 +447,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index b692d8b..d301f8c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index bd7af11..1dfac95 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of corresponding element in descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 281e1db..76d2b8a 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d68976f..9ff45b1 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 64468ab..5ae5137 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6..fed76be 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index 8a7d6ff..c191fa9 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index a24cfd4..38dda4d 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 9a0963a..3e5d0e6 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -160,6 +160,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -273,6 +274,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index d034ec5..ef6eb81 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#58David Steele
david@pgmasters.net
In reply to: Konstantin Knizhnik (#57)
Re: Built-in connection pooler

Hi Konstantin,

On 11/14/19 2:06 AM, Konstantin Knizhnik wrote:

Attached please find rebased patch with this bug fixed.

This patch no longer applies: http://cfbot.cputube.org/patch_27_2067.log

CF entry has been updated to Waiting on Author.

Regards,
--
-David
david@pgmasters.net

#59Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: David Steele (#58)
1 attachment(s)
Re: Built-in connection pooler

Hi David,

On 24.03.2020 16:26, David Steele wrote:

Hi Konstantin,

On 11/14/19 2:06 AM, Konstantin Knizhnik wrote:

Attached please find rebased patch with this bug fixed.

This patch no longer applies: http://cfbot.cputube.org/patch_27_2067.log

CF entry has been updated to Waiting on Author.

Rebased version of the patch is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-27.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index 6fbfef2..27aa6cb 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -94,6 +95,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -286,6 +289,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 355b408..23210ba 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -732,6 +732,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables, or prepares statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends can serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connecting to the main "port" are assigned dedicated backends,
+          while clients connecting to the proxy port are connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximum number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are sent together with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a lot of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the size of many Postgres internal data structures is proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler can reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend can work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximum number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximum number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only per database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>SET ROLE</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows session parameters to be set without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximum number of backends per connection pool. The maximum number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximum number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set, and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximum number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432 so that all connections to the databases are pooled by default.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    This port is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well for a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively, you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If the database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of the client application when a pooling policy other than session level is used.
+    And if the application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on the application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled for another session.
+    The obvious recommendation is to avoid long-living transactions and set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
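
The monitoring function mentioned in the configuration section can be polled from any connection, pooled or dedicated. A rough libpq sketch follows (illustration only, not part of the patch; connection parameters are placeholders, the column names are those defined for pg_pooler_state() in this patch):

/*
 * Illustrative monitoring client: pg_pooler_state() returns one row per
 * connection proxy worker.
 */
#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	PGconn	   *conn;
	PGresult   *res;
	int			i;

	conn = PQconnectdb("host=localhost port=5432 dbname=postgres user=postgres");
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	res = PQexec(conn,
				 "SELECT pid, n_clients, n_backends, n_dedicated_backends,"
				 " n_transactions FROM pg_pooler_state()");
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
	{
		for (i = 0; i < PQntuples(res); i++)
			printf("proxy %s: clients=%s backends=%s dedicated=%s xacts=%s\n",
				   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1),
				   PQgetvalue(res, i, 2), PQgetvalue(res, i, 3),
				   PQgetvalue(res, i, 4));
	}

	PQclear(res);
	PQfinish(conn);
	return 0;
}

With the load-balancing schedule one would typically watch n_clients and n_backends per proxy to verify that sessions are spread evenly.
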
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..b82637e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index e59cba7..1e5aa4f 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -158,6 +158,7 @@
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 40be506..9bd5dad 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -58,6 +59,8 @@ PerformCursorOpen(ParseState *pstate, DeclareCursorStmt *cstmt, ParamListInfo pa
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 284a5bf..a37654f 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -441,6 +442,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6aab73b..a80c85a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 8e35c5b..b178c34 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -615,6 +615,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 7717bb2..3ec8849 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -193,15 +193,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -218,6 +216,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -225,6 +228,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -327,7 +331,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 				 const char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -591,6 +595,7 @@ StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f..8c763c7 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..0a90a50
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_SPACE(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
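
For illustration only (not part of the patch), this is roughly how a parent process could hand an accepted client socket to a child over a socketpair using the helpers above; it is a Unix-only sketch with error handling elided, and the surrounding proxy logic is assumed:

#include "postgres.h"

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void
hand_over_client(pgsocket client_sock)
{
	pgsocket	chan[2];
	pid_t		pid;

	/* control channel used only for descriptor passing */
	socketpair(AF_UNIX, SOCK_STREAM, 0, chan);

	pid = fork();
	if (pid == 0)
	{
		/* child: receive the descriptor and serve the client */
		pgsocket	sock = pg_recv_sock(chan[1]);

		close(chan[1]);
		/* ... attach "sock" to the session handling code ... */
		close(sock);
		_exit(0);
	}

	/* parent: ship the accepted socket to the child (pid is used on Windows) */
	pg_send_sock(chan[0], client_sock, pid);
	close(client_sock);
	close(chan[0]);
}
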
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index 4843507..937d4a4 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index bfdf6a8..11dd9c8 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -24,6 +24,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a libpq connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not establish local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 2b9ab32..8345a28 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool secure_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1123,6 +1187,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1146,32 +1215,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1240,29 +1313,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1272,6 +1348,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1397,6 +1487,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1634,6 +1726,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1724,8 +1867,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection proxy: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1922,8 +2075,6 @@ ProcessStartupPacket(Port *port, bool secure_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1990,6 +2141,18 @@ ProcessStartupPacket(Port *port, bool secure_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, secure_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool secure_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2090,7 +2253,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2794,6 +2957,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2871,6 +3036,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4121,6 +4289,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4130,8 +4299,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4235,6 +4404,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4936,6 +5107,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5076,6 +5248,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5619,6 +5804,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for autovac workers, but we'd
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6223,6 +6476,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6455,6 +6712,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
 
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
+
 	/*
 	 * We need to restore fd.c's counts of externally-opened FDs; to avoid
 	 * confusion, be sure to do this after restoring max_safe_fds.  (Note:
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..dc21479
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
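+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker for the built-in connection pooler.
+ *	  Multiplexes client sessions over pools of backends grouped by
+ *	  dbname/role and dedicates ("taints") backends that acquire
+ *	  session-level state.
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */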
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected channels pending deletion */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * Now, if the backend is not tainted, it is possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
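+/*
+ * Length of the string after escaping by string_append(), which prefixes
+ * each space with a backslash: reserve one extra byte per space.
+ */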
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other events
+ * returned from the same epoll wait. So link all such channels into a list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send the 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because we use edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends abort inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check if it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume it is some error or shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query from this client will be sent later when a backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the particular pool associated with a dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it via BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking the magic field for validity we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems without epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: suspend reading until it is drained */
+					{
+						/* On systems without epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' (terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because of the presence of peer events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to expose information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
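
A minimal usage sketch (not part of the patch itself), assuming connection_proxies > 0 and the
default proxy_port of 6543: clients connect through the proxy port, and the function above can be
aggregated across proxies for a cluster-wide picture:

    postgres=# select sum(n_clients) as clients,
                      sum(n_backends) as backends,
                      sum(n_dedicated_backends) as dedicated,
                      sum(n_transactions) as xacts
                 from pg_pooler_state();
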
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 427b0d5..5259c24 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e2f4b11..9e0b0f6 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -78,11 +78,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events; instead we maintain a list of free events.
+ * But poll()/WaitForMultipleObjects() operate on a packed array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and the WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array slot), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a singly linked list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -90,6 +108,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -150,9 +170,9 @@ static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action
 #elif defined(WAIT_USE_KQUEUE)
 static void WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -574,6 +594,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -594,23 +615,23 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_KQUEUE)
 	set->kqueue_ret_events = (struct kevent *) data;
-	data += MAXALIGN(sizeof(struct kevent) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 	if (!AcquireExternalFD())
@@ -702,12 +723,11 @@ FreeWaitEventSet(WaitEventSet *set)
 	close(set->kqueue_fd);
 	ReleaseExternalFD();
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -720,7 +740,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -761,9 +781,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -790,8 +812,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -820,15 +854,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, 0);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -842,13 +902,19 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 	int			old_events;
 #endif
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 #if defined(WAIT_USE_KQUEUE)
 	old_events = event->events;
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+	/* With epoll, edge triggering is handled natively via EPOLLET, so a ModifyWaitEvent call carrying WL_SOCKET_EDGE (used to emulate edge triggering on other platforms) is a no-op here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -884,9 +950,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, old_events);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -924,6 +990,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -932,11 +1000,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -944,11 +1011,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -1111,9 +1183,21 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet; handles are offset by one for pgwin32_signal_event, so the last event's handle is at index nevents */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1551,11 +1635,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1578,15 +1663,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1677,17 +1760,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * On Windows, SSPI connections sometimes hang in WaitForMultipleObjects, which
+		 * does not signal the presence of input data (although the data can be read from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1753,7 +1844,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1794,7 +1885,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 3013ef6..67e03b9 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -801,7 +801,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 9938cdd..64bba49 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -396,6 +396,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 00c77b6..225ad64 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4263,6 +4263,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index ecb1bf9..b007c13 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index eb19644..6c0cc24 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index af876d1..6e0112e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -489,6 +489,13 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 StaticAssertDecl(lengthof(ssl_protocol_versions_info) == (PG_TLS1_3_VERSION + 2),
 				 "array length mismatch");
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -685,6 +692,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Builtin connection pool"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1373,6 +1382,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2243,6 +2282,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idle connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2290,6 +2376,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4708,6 +4804,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8281,6 +8387,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
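
For context on the is_tainted markers added above, a sketch (not from the patch) of statements
that would dedicate a pooled backend to its client: any non-local SET (the guc.c hunk above), a
session-level advisory lock (the lock.c hunk above), and, per the documentation, temporary tables
and prepared statements:

    postgres=# SET work_mem = '64MB';          -- non-local SET
    postgres=# SELECT pg_advisory_lock(1);     -- session-level lock
    postgres=# CREATE TEMP TABLE tmp(x int);   -- temporary table
    postgres=# PREPARE q AS SELECT 1;          -- prepared statement
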
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index aa44f0c..4aaaff9 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -757,6 +757,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to a non-zero value enables the builtin connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idle connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87d25d4..74fc5d0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10819,4 +10819,11 @@
   proname => 'pg_partition_root', prorettype => 'regclass',
   proargtypes => 'regclass', prosrc => 'pg_partition_root' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 82e57af..014a302 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 4d3a0be..98dff24 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, const char *hostName,
-							 unsigned short portNumber, const char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, const char *hostName,
+							unsigned short portNumber, const char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 14fa127..3c25266 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 29f3e39..535b88c 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 8b6576b..1e0fec7 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -436,6 +436,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -446,6 +447,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index babc87d..680ddf6 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool SSLdone);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 46ae56c..a8f2d31 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index d217801..1856547 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -203,6 +203,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 454c2df..88dbea5 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 72931e6..6a22a21 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6..fed76be 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index e72cb2d..183c8de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index 1a31640..2caf6bb 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 636428b..0b5f7d1 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -163,6 +163,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -274,6 +275,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index 672bb2d..f60d4ba 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#60Daniel Gustafsson
daniel@yesql.se
In reply to: Konstantin Knizhnik (#59)
Re: Built-in connection pooler

On 24 Mar 2020, at 17:24, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Rebased version of the patch is attached.

And this patch also fails to apply now, can you please submit a new version?
Marking the entry as Waiting on Author in the meantime.

cheers ./daniel

#61Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Daniel Gustafsson (#60)
1 attachment(s)
Re: Built-in connection pooler

On 01.07.2020 12:30, Daniel Gustafsson wrote:

On 24 Mar 2020, at 17:24, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
Rebased version of the patch is attached.

And this patch also fails to apply now, can you please submit a new version?
Marking the entry as Waiting on Author in the meantime.

cheers ./daniel

Rebased version of the patch is attached.

Attachments:

builtin_connection_proxy-28.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index 6fbfef2b12..27aa6cba8e 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -94,6 +95,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -286,6 +289,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b81aab239f..aa435d4066 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -732,6 +732,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates session GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" are assigned dedicated backends,
+          while clients connected to the proxy port are connected to backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          A proxy's load average is estimated from the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting a session parameter is replaced with setting a local (transaction) parameter,
+          which is concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
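
To make the sizing formula above concrete (a worked example, not from the patch): with
connection_proxies = 2, session_pool_size = 10, two databases and three roles, at most
2 * 10 * 2 * 3 = 120 non-tainted backends can be launched, while max_sessions = 1000 still
admits up to 1000 client sessions per proxy, i.e. 2000 pooled connections in total.
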
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..c63ba2626e
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures, as well as the complexity of the
+    algorithms operating on them, are proportional to the number of active backends.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE application server pools, or Odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on highly loaded systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes to the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when the backend has completed its current transaction.
+  </para>
+
+  <para>
+    Because each Postgres backend is able to work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
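+
+  <para>
+    For example, with <varname>connection_proxies</varname>=2, <varname>session_pool_size</varname>=10,
+    two databases and one role, up to 2*10*2*1=40 pooled backends may be spawned.
+  </para>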
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) within one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only per database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch to each other.
+  </para>
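+
+  <para>
+    For example, for two hypothetical application roles <literal>alice</literal> and <literal>bob</literal>,
+    the necessary membership could be granted as follows (the role names are illustrative only):
+<programlisting>
+GRANT alice TO bob;
+GRANT bob TO alice;
+</programlisting>
+  </para>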
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is then dedicated to this session and cannot be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows session parameters to be set without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
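+
+  <para>
+    Conceptually, when <varname>proxying_gucs</varname> is enabled and a client has requested, for example,
+    <literal>work_mem='64MB'</literal>, the statements forwarded by the proxy are prefixed roughly as follows
+    (an illustrative sketch, not the exact wire traffic):
+<programlisting>
+SET LOCAL work_mem = '64MB';
+SELECT count(*) FROM pg_class;
+</programlisting>
+  </para>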
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres must be configured
+    to accept local connections (in the <literal>pg_hba.conf</literal> file), as in the sketch below.
+  </para>
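+
+  <para>
+    A minimal <literal>pg_hba.conf</literal> entry allowing the proxy to launch worker backends over a local
+    connection might look like the following. The authentication method shown here is only a placeholder;
+    choose one appropriate for your installation:
+<programlisting>
+# TYPE  DATABASE        USER            ADDRESS                 METHOD
+host    all             all             127.0.0.1/32            trust
+</programlisting>
+  </para>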
+
+  <para>
+    If a client application connects through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and cannot migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero, as in the example below.
+  </para>
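+
+  <para>
+    A minimal <literal>postgresql.conf</literal> fragment enabling the pooler could look like this
+    (the values are illustrative only):
+<programlisting>
+connection_proxies = 2
+session_pool_size = 10
+max_sessions = 1000
+proxy_port = 6543
+</programlisting>
+  </para>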
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But too large a value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be made large enough without any essential negative impact on system resource consumption.
+    The default value is 1000, so the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all client connections to the databases are pooled.
+    In that case it is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed by the connection pooler itself to launch worker backends.
+  </para>
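+
+  <para>
+    For example, with the default settings a client can be connected either through the pooler or directly,
+    simply by choosing the port (the database name is illustrative):
+<programlisting>
+# pooled connection through the proxy
+psql -h localhost -p 6543 mydb
+# direct connection served by a dedicated backend
+psql -h localhost -p 5432 mydb
+</programlisting>
+  </para>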
+
+  <para>
+    The postmaster distributes sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  Dropping the database can be made possible without a server restart by using the <varname>restart_pooler_on_reload</varname> variable: setting it to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function, after which the database can be dropped. Alternatively, you can set <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers that have not been used for the specified time. If a database is not accessed for a long time, all its pool workers are terminated.
+  </para>
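+
+  <para>
+    For example, with <varname>restart_pooler_on_reload</varname> set to <literal>true</literal>, a database
+    can be dropped as follows (the database name is illustrative):
+<programlisting>
+SELECT pg_reload_conf();
+DROP DATABASE mydb;
+</programlisting>
+  </para>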
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any extra components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and will not notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session-level pooling is used.
+    If an application does not change the session context, it can be pooled implicitly, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although this is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements cannot be handled by a pooled backend, so if all clients use prepared statements, there will be no connection pooling
+    even if connection pooling is enabled, as the example below illustrates.
+  </para>
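+
+  <para>
+    For example, a client that executes the following statements will be served by a dedicated backend
+    from that point on (the table and statement names are illustrative):
+<programlisting>
+PREPARE get_account(int) AS SELECT * FROM accounts WHERE id = $1;
+EXECUTE get_account(1);
+</programlisting>
+  </para>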
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially on latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. This greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend cannot be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions, for example as shown below.
+  </para>
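+
+  <para>
+    For example, the following setting aborts transactions that stay idle for more than 30 seconds
+    (the value is illustrative):
+<programlisting>
+idle_in_transaction_session_timeout = '30s'
+</programlisting>
+  </para>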
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 64b5da0070..c48f585491 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index c41ce9499b..a8b0c40c6f 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -165,6 +165,7 @@ break is not needed in a wider output rendering.
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index e4b7483e32..3a24fed96e 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -58,6 +59,8 @@ PerformCursorOpen(ParseState *pstate, DeclareCursorStmt *cstmt, ParamListInfo pa
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 80d6df8ac1..3bd13e3240 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -441,6 +442,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6aab73bfd4..a80c85ac2b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required there, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index f79044f39f..84bfd26804 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -617,6 +617,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 7717bb2719..3ec8849a84 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -193,15 +193,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -218,6 +216,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -225,6 +228,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -327,7 +331,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 				 const char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -591,6 +595,7 @@ StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f05a..8c763c719d 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..0a90a50fd4
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_SPACE(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index 6fbd1ed6fb..b59cc26e16 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index bfdf6a833d..11dd9c8733 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -24,6 +24,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..d950a8c281
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not setup local connect to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index b4d475bb0b..0925b31052 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1123,6 +1187,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1146,32 +1215,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1240,29 +1313,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1272,6 +1348,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1397,6 +1487,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1634,6 +1726,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1724,8 +1867,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1926,8 +2079,6 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1994,6 +2145,18 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, ssl_done, gss_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool ssl_done, bool gss_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2103,7 +2266,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2807,6 +2970,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2884,6 +3049,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4134,6 +4302,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4143,8 +4312,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4248,6 +4417,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4944,6 +5115,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5084,6 +5256,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5626,6 +5811,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return   ;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6231,6 +6484,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6463,6 +6720,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
 
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
+
 	/*
 	 * We need to restore fd.c's counts of externally-opened FDs; to avoid
 	 * confusion, be sure to do this after restoring max_safe_fds.  (Note:
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..dc214790c6
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * The proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backed %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backed %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backed %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
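+	/* Message length: 4 (length word) + 1 (field code byte) + error text with its terminating zero + 1 (end-of-fields byte) */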
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So all such channels are linked into a single-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the is_interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because of the edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or when the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
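+/*
+ * Check whether the statement explicitly starts a transaction block (BEGIN or START TRANSACTION).
+ */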
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
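+/*
+ * Check whether it is safe to prepend the statement with injected "set local" commands.
+ * Commands such as VACUUM or CREATE DATABASE can not be combined with them in a single
+ * query string, so GUCs are not prepended to such statements.
+ */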
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
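+				/* Startup packet has no message type byte: the first four bytes are the total packet length */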
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
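+				/* Regular protocol message: one type byte followed by a four-byte length that does not include the type byte */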
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends termination in the middle of a transaction, mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query will be sent to the backend later when a backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too much sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the pool associated with a particular dbname/role combination.
+ * The backend is forked by the postmaster via BackendStartup when the proxy connects to it through libpq.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
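+	/* Reserve space for the command line options, all GUC names and values, the " -c " and "=" separators (5 chars per pair) and a terminating zero */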
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
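+	/* msg now points at the BackendKeyData ('K') message: skip the type byte and four-byte length to reach the backend PID */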
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too much sessios: try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to the proper session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too much sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too much sessions, error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking for a valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: suspend reading until it is flushed */
+					{
+						/* On systems that do not support epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
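+		/* Terminate pool workers which have stayed idle longer than idle_pool_worker_timeout */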
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* interrupted flags makes channel_write to send 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We can not delete channels immediately because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of backends not currently attached to a client
+ * n_idle_clients - number of clients not currently attached to a backend
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 427b0d59cd..5259c243bc 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 91fa4b619b..c6f2d85879 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -78,11 +78,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we can not move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored events.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array) which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* L1-list of free events linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -90,6 +108,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -150,9 +170,9 @@ static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action
 #elif defined(WAIT_USE_KQUEUE)
 static void WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -574,6 +594,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -594,23 +615,23 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_KQUEUE)
 	set->kqueue_ret_events = (struct kevent *) data;
-	data += MAXALIGN(sizeof(struct kevent) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 	if (!AcquireExternalFD())
@@ -702,12 +723,11 @@ FreeWaitEventSet(WaitEventSet *set)
 	close(set->kqueue_fd);
 	ReleaseExternalFD();
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -720,7 +740,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -761,9 +781,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -790,8 +812,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -820,14 +854,40 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, 0);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the given position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -842,13 +902,19 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 	int			old_events;
 #endif
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 #if defined(WAIT_USE_KQUEUE)
 	old_events = event->events;
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -884,9 +950,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, old_events);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -924,6 +990,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -932,11 +1000,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -944,11 +1011,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -1111,9 +1183,21 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1551,11 +1635,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1578,15 +1663,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1677,17 +1760,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * I have seen a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1753,7 +1844,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1794,7 +1885,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 95989ce79b..a7289026b6 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -813,7 +813,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index e57fcd2538..f4cff52588 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -396,6 +396,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyProc->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index c9424f167c..c046525beb 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4283,6 +4283,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index e992d1bbfc..c640ffacab 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index eb19644419..6c0cc24625 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 75fc6f11d6..0246bc89fd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -481,6 +481,13 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 StaticAssertDecl(lengthof(ssl_protocol_versions_info) == (PG_TLS1_3_VERSION + 2),
 				 "array length mismatch");
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -678,6 +685,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Builtin connection pool"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1364,6 +1373,36 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2225,6 +2264,53 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns separate worker process for each proxy. Postmaster scatters connections between proxies using one of scheduling policies (round-robin, random, load-balancing)."
+						 "Each proxy launches its own subset of backends. So maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy."
+						 "It can be greater than max_connections and actually be arbitrary large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2272,6 +2358,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4755,6 +4851,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8328,6 +8434,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 3a25287a39..d3149f2734 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -770,6 +770,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to a non-zero value enables the built-in connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 38295aca48..b7f1b1371f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10948,4 +10948,11 @@
   proname => 'is_normalized', prorettype => 'bool', proargtypes => 'text text',
   prosrc => 'unicode_is_normalized' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 179ebaa104..6a8195dc53 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index b1152475ac..4e0f22300b 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, const char *hostName,
-							 unsigned short portNumber, const char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, const char *hostName,
+							unsigned short portNumber, const char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 18bc8a7b90..6ef450be0e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 271ff0d00b..9ba7f7faf9 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 8b6576b23d..1e0fec75be 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -436,6 +436,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -446,6 +447,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index babc87dfc9..edf587104f 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..254d0f099e
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 46ae56cae3..a8f2d3194b 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
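
The WL_SOCKET_EDGE flag and DeleteWaitEventFromSet() above are the hooks a proxy needs to multiplex many client sockets in a single wait event set. The fragment below is only a rough sketch of how such a loop might use them, not code from the patch: ClientState and handle_client_data() are invented names, and the edge-triggered semantics are assumed to correspond to epoll's EPOLLET.

#include "postgres.h"
#include "miscadmin.h"          /* MaxSessions */
#include "storage/latch.h"
#include "utils/memutils.h"     /* TopMemoryContext */

typedef struct ClientState ClientState;            /* hypothetical per-client context */
extern void handle_client_data(ClientState *cs);   /* hypothetical */

static void
proxy_loop_sketch(pgsocket client_sock, ClientState *cs)
{
    WaitEventSet *wes = CreateWaitEventSet(TopMemoryContext, MaxSessions);
    WaitEvent     ev;
    int           pos;

    /* remember the event position so the client can be removed on disconnect */
    pos = AddWaitEventToSet(wes, WL_SOCKET_READABLE | WL_SOCKET_EDGE,
                            client_sock, NULL, cs);

    for (;;)
    {
        if (WaitEventSetWait(wes, -1L, &ev, 1, 0) > 0 &&
            (ev.events & WL_SOCKET_READABLE))
            handle_client_data((ClientState *) ev.user_data);

        if (false)              /* placeholder: client disconnected */
        {
            DeleteWaitEventFromSet(wes, pos);
            break;
        }
    }
    FreeWaitEventSet(wes);
}
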
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index b20e2ad4f6..530bf8d96c 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -210,6 +210,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 454c2df487..88dbea510d 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1de91ae295..aec3306aec 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6257..fed76be9e0 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index e72cb2db0e..183c8de2ce 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index c830627b00..7f14dcd51c 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000000..ebaa257f4b
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 20da7985c1..33a3e6b037 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -162,6 +162,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -273,6 +274,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index 672bb2d650..f60d4ba985 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#62Daniel Gustafsson
daniel@yesql.se
In reply to: Konstantin Knizhnik (#61)
Re: Built-in connection pooler

On 2 Jul 2020, at 13:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
On 01.07.2020 12:30, Daniel Gustafsson wrote:

On 24 Mar 2020, at 17:24, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
Rebased version of the patch is attached.

And this patch also fails to apply now, can you please submit a new version?
Marking the entry as Waiting on Author in the meantime.

cheers ./daniel

Rebased version of the patch is attached.

Both Travis and Appveyor fails to compile this version:

proxy.c: In function ‘client_connect’:
proxy.c:302:6: error: too few arguments to function ‘ParseStartupPacket’
if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
^
In file included from proxy.c:8:0:
../../../src/include/postmaster/postmaster.h:71:12: note: declared here
extern int ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
^
<builtin>: recipe for target 'proxy.o' failed
make[3]: *** [proxy.o] Error 1

cheers ./daniel

#63Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Daniel Gustafsson (#62)
1 attachment(s)
Re: Built-in connection pooler

On 02.07.2020 17:44, Daniel Gustafsson wrote:

On 2 Jul 2020, at 13:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
On 01.07.2020 12:30, Daniel Gustafsson wrote:

On 24 Mar 2020, at 17:24, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
Rebased version of the patch is attached.

And this patch also fails to apply now, can you please submit a new version?
Marking the entry as Waiting on Author in the meantime.

cheers ./daniel

Rebased version of the patch is attached.

Both Travis and Appveyor fails to compile this version:

proxy.c: In function ‘client_connect’:
proxy.c:302:6: error: too few arguments to function ‘ParseStartupPacket’
if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false) != STATUS_OK) /* skip packet size */
^
In file included from proxy.c:8:0:
../../../src/include/postmaster/postmaster.h:71:12: note: declared here
extern int ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
^
<builtin>: recipe for target 'proxy.o' failed
make[3]: *** [proxy.o] Error 1

cheers ./daniel

Sorry, correct patch is attached.

Attachments:

builtin_connection_proxy-29.patch (text/x-patch; charset=UTF-8)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index 6fbfef2b12..27aa6cba8e 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -94,6 +95,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -286,6 +289,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b81aab239f..aa435d4066 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -732,6 +732,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main <varname>port</varname> are assigned dedicated backends,
+          while clients connected to the proxy port are served by backends through a proxy which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are prepended to each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..c63ba2626e
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations are using some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only for each database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (the <varname>proxy_port</varname> configuration option, default value 6543).
+    If a client connects to Postgres through the standard port (the <varname>port</varname> configuration option, default value 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added for SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled only if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    Actually it affects only the size of the wait event set and so can be large enough without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that by default all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends).
+    It is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively, you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long
+    time, then all its pool workers are terminated.
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling, but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of a client application when a pooling policy other than session level is used.
+    And if the application does not change the session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if it is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 64b5da0070..c48f585491 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index c41ce9499b..a8b0c40c6f 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -165,6 +165,7 @@ break is not needed in a wider output rendering.
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd9588a..196ca8c0f0 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index e4b7483e32..3a24fed96e 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -58,6 +59,8 @@ PerformCursorOpen(ParseState *pstate, DeclareCursorStmt *cstmt, ParamListInfo pa
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 80d6df8ac1..3bd13e3240 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -441,6 +442,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6aab73bfd4..a80c85ac2b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behaviour with the connection pooler.
+	 * Unfortunately marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make regression tests pass, the backend is also marked as tainted when it creates a
+	 * sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index f79044f39f..84bfd26804 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -617,6 +617,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 7717bb2719..3ec8849a84 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -193,15 +193,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -218,6 +216,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -225,6 +228,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -327,7 +331,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 				 const char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -591,6 +595,7 @@ StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f05a..8c763c719d 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..0a90a50fd4
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_SPACE(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
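
To show how pg_send_sock() and pg_recv_sock() are meant to be used together, here is a small sketch (not part of the patch) in which a parent process hands an accepted client socket to a forked worker over a socketpair, the same pattern the postmaster/proxy pair relies on; the function and variable names are invented for the example.

#include <sys/socket.h>
#include <unistd.h>

#include "postgres.h"       /* pulls in port.h with pg_send_sock()/pg_recv_sock() */

static void
hand_off_example(pgsocket client_sock)
{
    pgsocket    chan[2];    /* chan[0] stays in the parent, chan[1] goes to the child */
    pid_t       pid;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, chan) < 0)
        elog(ERROR, "could not create socket pair: %m");

    pid = fork();
    if (pid == 0)
    {
        /* child: receive the descriptor and serve the client on it */
        pgsocket    sock = pg_recv_sock(chan[1]);

        if (sock == PGINVALID_SOCKET)
            elog(ERROR, "could not receive client socket");
        /* ... proxy traffic on "sock" ... */
        close(sock);
        _exit(0);
    }

    /* parent: ship the accepted socket to the child and close the local copy */
    if (pg_send_sock(chan[0], client_sock, pid) < 0)
        elog(ERROR, "could not send client socket to worker");
    close(client_sock);
}
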
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index 6fbd1ed6fb..b59cc26e16 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index bfdf6a833d..11dd9c8733 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -24,6 +24,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..d950a8c281
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
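
The library above only installs the LibpqConnectdbParams hook; the proxy side is the consumer. Below is a hedged sketch of how a caller could use the hook to open the local connection needed for launching worker backends; the keyword list and the helper name are illustrative, not taken from the patch.

#include "postgres.h"
#include "postmaster/postmaster.h"      /* LibpqConnectdbParams */

static void *
open_pool_connection_sketch(const char *dbname, const char *user, const char *port)
{
    /* parameter arrays in the usual libpq PQconnectdbParams format */
    char const *keywords[] = {"dbname", "user", "port", NULL};
    char const *values[]   = {dbname, user, port, NULL};
    char       *errmsg = NULL;
    void       *conn;

    if (LibpqConnectdbParams == NULL)
        elog(ERROR, "libpqconn library is not loaded");

    conn = LibpqConnectdbParams(keywords, values, &errmsg);
    if (conn == NULL)
        elog(ERROR, "could not open pooler connection: %s",
             errmsg ? errmsg : "unknown error");

    return conn;    /* an opaque PGconn*, owned by the caller */
}
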
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index b4d475bb0b..0925b31052 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/fork_process.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -196,6 +197,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -216,6 +220,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -246,6 +251,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -403,7 +420,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -425,6 +441,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -477,6 +494,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -560,6 +579,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -572,6 +633,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1123,6 +1187,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1146,32 +1215,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1240,29 +1313,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1272,6 +1348,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1397,6 +1487,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1634,6 +1726,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * Estimate the workload of a proxy.
+ * The ProxyState array holds a lot of information about proxy state:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle a much more sophisticated evaluation function could be implemented,
+ * but right now we take into account only the number of clients and of SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it would not be difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1724,8 +1867,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1926,8 +2079,6 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -1994,6 +2145,18 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, ssl_done, gss_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool ssl_done, bool gss_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2103,7 +2266,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2807,6 +2970,8 @@ pmdie(SIGNAL_ARGS)
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
 
+				StopConnectionProxies(SIGTERM);
+
 				/*
 				 * If we're in recovery, we can't kill the startup process
 				 * right away, because at present doing so does not release
@@ -2884,6 +3049,9 @@ pmdie(SIGNAL_ARGS)
 				/* and the walwriter too */
 				if (WalWriterPID != 0)
 					signal_child(WalWriterPID, SIGTERM);
+
+				StopConnectionProxies(SIGTERM);
+
 				pmState = PM_WAIT_BACKENDS;
 			}
 
@@ -4134,6 +4302,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4143,8 +4312,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4248,6 +4417,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4944,6 +5115,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5084,6 +5256,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5626,6 +5811,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6231,6 +6484,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6463,6 +6720,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
 
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
+
 	/*
 	 * We need to restore fd.c's counts of externally-opened FDs; to avoid
 	 * confusion, be sure to do this after restoring max_safe_fds.  (Note:
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..9df2fc4a0b
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
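+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker: multiplexes pooled client sessions over a
+ *	  set of backend connections (built-in connection pooling).
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */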
+#include "postgres.h"
+
+#include <unistd.h>
+#include <errno.h>
+
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * Proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of socket descriptors of backends and clients socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected backends */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client onto this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
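+/* Length of the string plus the number of spaces in it (extra room for the backslash escapes added by string_append); 0 for NULL */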
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
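+/* Make a deep copy of a list of strings */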
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
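+/* Copy src to dst, escaping each space with a backslash; returns a pointer past the copied text */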
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
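+/* NULL-tolerant string equality check */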
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if the GUCs in startup packets differ.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend can not be started
+ * or a client can not be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it can still be referenced by other epoll events.
+ * So link all such channels into a single-linked list for pending deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer's buffer. Because edge-triggered mode is used, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
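+/* Does the statement start an explicit transaction block (BEGIN or START TRANSACTION)? */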
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
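+/* Return false for statements which can not be executed inside a transaction block, so that session GUCs are not prepended to them */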
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends termination inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Do not forward the terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to the backend to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query from this client will be sent later when a backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start a new backend for the pool associated with a particular dbname/role combination.
+ * The backend is launched by opening a libpq connection to the postmaster, which forks it through BackendStartup.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if the
+			 * socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * Using this check for valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: read is now prohibited */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, macOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' (terminate) message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from another auxiliary workers.
+ * In future it may be replaced with background worker.
+ * The main problem with background worker is how to pass socket to it and obtains its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about proxies state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * n_idle_backends - number of idle backends
+ * n_idle_clients  - number of idle clients
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 427b0d59cd..5259c243bc 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -255,6 +257,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 91fa4b619b..c6f2d85879 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -78,11 +78,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we have to maintain a list of free events.
+ * But poll()/WaitForMultipleObjects() operate on arrays of monitored descriptors,
+ * so elements of the pollfds and handles arrays must be stored without holes
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need a backward mapping
+ * (from event to descriptor array position), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -90,6 +108,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -150,9 +170,9 @@ static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action
 #elif defined(WAIT_USE_KQUEUE)
 static void WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -574,6 +594,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -594,23 +615,23 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_KQUEUE)
 	set->kqueue_ret_events = (struct kevent *) data;
-	data += MAXALIGN(sizeof(struct kevent) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 	if (!AcquireExternalFD())
@@ -702,12 +723,11 @@ FreeWaitEventSet(WaitEventSet *set)
 	close(set->kqueue_fd);
 	ReleaseExternalFD();
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -720,7 +740,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -761,9 +781,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -790,8 +812,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -820,14 +854,40 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, 0);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
@@ -842,13 +902,19 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 	int			old_events;
 #endif
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 #if defined(WAIT_USE_KQUEUE)
 	old_events = event->events;
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -884,9 +950,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, old_events);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -924,6 +990,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -932,11 +1000,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -944,11 +1011,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -1111,9 +1183,21 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1551,11 +1635,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1578,15 +1663,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1677,17 +1760,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1753,7 +1844,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1794,7 +1885,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 95989ce79b..a7289026b6 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -813,7 +813,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index e57fcd2538..f4cff52588 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -396,6 +396,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyProc->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index c9424f167c..c046525beb 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4283,6 +4283,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index e992d1bbfc..c640ffacab 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index eb19644419..6c0cc24625 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,9 +130,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -148,3 +154,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 75fc6f11d6..0246bc89fd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -481,6 +481,13 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 StaticAssertDecl(lengthof(ssl_protocol_versions_info) == (PG_TLS1_3_VERSION + 2),
 				 "array length mismatch");
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -678,6 +685,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Builtin connection pool"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1364,6 +1373,36 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2225,6 +2264,53 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size * connection_proxies * databases * roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2272,6 +2358,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4755,6 +4851,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8328,6 +8434,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 3a25287a39..d3149f2734 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -770,6 +770,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to a non-zero value enables the built-in connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 38295aca48..b7f1b1371f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10948,4 +10948,11 @@
   proname => 'is_normalized', prorettype => 'bool', proargtypes => 'text text',
   prosrc => 'unicode_is_normalized' },
 
+# built-in connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 179ebaa104..6a8195dc53 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index b1152475ac..4e0f22300b 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, const char *hostName,
-							 unsigned short portNumber, const char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, const char *hostName,
+							unsigned short portNumber, const char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 18bc8a7b90..6ef450be0e 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 271ff0d00b..9ba7f7faf9 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 8b6576b23d..1e0fec75be 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -436,6 +436,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -446,6 +447,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index babc87dfc9..edf587104f 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..254d0f099e
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to clients */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 46ae56cae3..a8f2d3194b 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for the poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -177,6 +182,8 @@ extern int	WaitLatch(Latch *latch, int wakeEvents, long timeout,
 extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index b20e2ad4f6..530bf8d96c 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -210,6 +210,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 454c2df487..88dbea510d 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1de91ae295..aec3306aec 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6257..fed76be9e0 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index e72cb2db0e..183c8de2ce 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index c830627b00..7f14dcd51c 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000000..ebaa257f4b
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 20da7985c1..33a3e6b037 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -162,6 +162,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -273,6 +274,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index 672bb2d650..f60d4ba985 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#64Jaime Casanova
jaime.casanova@2ndquadrant.com
In reply to: Konstantin Knizhnik (#41)
Re: Built-in connection pooler

On Wed, 7 Aug 2019 at 02:49, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Hi, Li

Thank you very much for reporting the problem.

On 07.08.2019 7:21, Li Japin wrote:

I inspected the code and found the following code in the DefineRelation function:

if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
&& stmt->oncommit != ONCOMMIT_DROP)
MyProc->is_tainted = true;

For a temporary table, MyProc->is_tainted might be true; I changed it as
follows:

if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
|| stmt->oncommit == ONCOMMIT_DROP)
MyProc->is_tainted = true;

For temporary tables, it works. I am not sure the change is right.

Sorry, it is really a bug.
My intention was to mark the backend as tainted if it is creating a temporary
table without the ON COMMIT DROP clause (in the latter case the temporary table
will be local to the transaction and so cause no problems with the pooler).
The right condition is:

if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
&& stmt->oncommit != ONCOMMIT_DROP)
MyProc->is_tainted = true;

You should also allow cursors without the WITH HOLD option, or is there
something I'm missing?

a few questions about tainted backends:
- why does the use of check_primary_key() and check_foreign_key() in
contrib/spi/refint.c make the backend tainted?
- the comment in src/backend/commands/sequence.c needs some fixes; it
seems to have been typed quickly

a usability problem:
- I compiled this on a Debian machine with "--enable-debug
--enable-cassert --with-pgport=54313", so nothing fancy
- then make, make install, and initdb: so far so good

configuration:
listen_addresses = '*'
connection_proxies = 20

and I got this:

"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 6543 postgres
psql: error: could not connect to server: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.54313"?
"""

but connecting to the postgres port works well
"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=# \q
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=#
"""

PS: unix_socket_directories is set to /tmp because I'm not
running this as the postgres user and so can't use /var/run/postgresql

--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#65Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Jaime Casanova (#64)
Re: Built-in connection pooler

Thank you for your help.

On 05.07.2020 07:17, Jaime Casanova wrote:

You should also allow cursors without the WITH HOLD option, or is there
something I'm missing?

Yes, good point.
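
To make the distinction concrete, here is a minimal sketch (a hypothetical
psql session, not taken from the patch or its tests): a cursor declared
without WITH HOLD is closed automatically at transaction end, so, like an
ON COMMIT DROP temporary table, it leaves no session state behind, while a
WITH HOLD cursor does and so still has to taint the backend.

BEGIN;
DECLARE c_plain CURSOR FOR SELECT 1;           -- closed automatically at COMMIT
DECLARE c_hold CURSOR WITH HOLD FOR SELECT 1;  -- survives COMMIT, i.e. session state
COMMIT;
FETCH ALL FROM c_hold;   -- works
FETCH ALL FROM c_plain;  -- ERROR: cursor "c_plain" does not exist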

a few questions about tainted backends:
- why does the use of check_primary_key() and check_foreign_key() in
contrib/spi/refint.c make the backend tainted?

I think this is because without it the contrib test does not pass with
the connection pooler.
This extension uses static variables which are assumed to be session
specific, but with the connection pooler they are shared by all sessions served by the same backend.
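
More generally, here is a rough summary of what marks a backend as
dedicated (tainted) with the current patch, based on the guc.c and lock.c
hunks and the temporary-table fix discussed above. These are hypothetical
example statements, meant as a guide rather than an exhaustive or
authoritative list:

SET work_mem = '64MB';                        -- non-LOCAL SET: taints
SET LOCAL work_mem = '64MB';                  -- transaction-local: does not taint
CREATE TEMP TABLE t(x int);                   -- session temporary table: taints
CREATE TEMP TABLE t2(x int) ON COMMIT DROP;   -- transaction-local: does not taint
PREPARE q AS SELECT 1;                        -- prepared statement: taints
SELECT pg_advisory_lock(1);                   -- session-level lock: taints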

- the comment in src/backend/commands/sequence.c needs some fixes; it
seems to have been typed quickly

Sorry, done.

a usability problem:
- I compiled this on a Debian machine with "--enable-debug
--enable-cassert --with-pgport=54313", so nothing fancy
- then make, make install, and initdb: so far so good

configuration:
listen_addresses = '*'
connection_proxies = 20

and I got this:

"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 6543 postgres
psql: error: could not connect to server: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.54313"?
"""

but connecting to the postgres port works well
"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=# \q
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=#
"""

PS: unix_socket_directories is set to /tmp because I'm not
running this as the postgres user and so can't use /var/run/postgresql

Looks like for some reason your Postgres was not configured to accept
TCP connections.
It accepts only local connections through Unix sockets.
But the pooler does not listen on Unix sockets because there is absolutely no
sense in pooling local connections.

I have done the same steps as you and have no problem accessing the pooler:

knizhnik@xps:~/postgresql.vanilla$ psql postgres -h 127.0.0.1 -p 6543
psql (14devel)
Type "help" for help.

postgres=# \q

Please notice that if I specify some nonexistent port, then I get an error
message which is different from yours:

knizhnik@xps:~/postgresql.vanilla$ psql postgres -h 127.0.0.1 -p 65433
psql: error: could not connect to server: could not connect to server:
Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 65433?

So Postgres does not mention the Unix socket path in this case. It makes me
think that your server is not accepting TCP connections at all (despite
listen_addresses = '*'

)

#66Anna Akenteva
a.akenteva@postgrespro.ru
In reply to: Jaime Casanova (#64)
1 attachment(s)
Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT ... USING INDEX ...

Hello, hackers!

I'd like to propose a feature for changing a constraint's index. The
provided patch allows doing it for EXCLUDE, UNIQUE, PRIMARY KEY and
FOREIGN KEY constraints.

Feature description:
ALTER TABLE ... ALTER CONSTRAINT ... USING INDEX ...
Replace a constraint's index with another sufficiently similar index.

Use cases:
- Removing index bloat [1] (now also achieved by REINDEX CONCURRENTLY)
- Swapping a normal index for an index with INCLUDED columns, or vice
versa

Example of use:
CREATE TABLE target_tbl (
id integer PRIMARY KEY,
info text
);
CREATE TABLE referencing_tbl (
id_ref integer REFERENCES target_tbl (id)
);
-- Swapping primary key's index for an equivalent index,
-- but with INCLUDE-d attributes.
CREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);
ALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX
new_idx;
ALTER TABLE referencing_tbl ALTER CONSTRAINT referencing_tbl_id_ref_fkey
USING INDEX new_idx;
DROP INDEX target_tbl_pkey;

I'd like to hear your feedback on this feature.
Also, some questions:
1) If the index supporting a UNIQUE or PRIMARY KEY constraint is
changed, should foreign keys also automatically switch to the new index?
Or should the user switch it manually, by using ALTER CONSTRAINT USING
INDEX on the foreign key?
2) Whose name should change to fit the other - constraint's or index's?

[1]: /messages/by-id/CABwTF4UxTg+kERo1Nd4dt+H2miJoLPcASMFecS1-XHijABOpPg@mail.gmail.com

Attachments:

alter_con_idx_v1.patch (text/x-diff)
diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 991323d3471..341b5631ecc 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -745,6 +745,13 @@ CREATE TABLE products (
    <para>
     Adding a unique constraint will automatically create a unique B-tree
     index on the column or group of columns listed in the constraint.
+    If required, you can later use the <literal>ALTER CONSTRAINT ... USING INDEX</literal>
+    clause of the <literal>ALTER TABLE</literal> command to replace this index
+    with another unique B-tree index that enforces this constraint over the
+    same key columns. In this case, the constraint name is changed accordingly.
+   </para>
+
+   <para>
     A uniqueness restriction covering only some rows cannot be written as
     a unique constraint, but it is possible to enforce such a restriction by
     creating a unique <link linkend="indexes-partial">partial index</link>.
@@ -821,6 +828,10 @@ CREATE TABLE example (
     Adding a primary key will automatically create a unique B-tree index
     on the column or group of columns listed in the primary key, and will
     force the column(s) to be marked <literal>NOT NULL</literal>.
+    If required, you can later use the <literal>ALTER CONSTRAINT ... USING INDEX</literal>
+    clause of the <literal>ALTER TABLE</literal> command to replace this index
+    with another unique B-tree index that enforces this constraint over the
+    same key columns. In this case, the constraint name is changed accordingly.
    </para>
 
    <para>
@@ -1112,6 +1123,10 @@ CREATE TABLE circles (
    <para>
     Adding an exclusion constraint will automatically create an index
     of the type specified in the constraint declaration.
+    If required, you can later use the <literal>ALTER CONSTRAINT ... USING INDEX</literal>
+    clause of the <literal>ALTER TABLE</literal> command to replace this index
+    with another index of the same type that enforces this constraint over the
+    same key columns. In this case, the constraint name is changed accordingly.
    </para>
   </sect2>
  </sect1>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index b2eb7097a95..49b6deef4d4 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -57,6 +57,7 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
     ADD <replaceable class="parameter">table_constraint</replaceable> [ NOT VALID ]
     ADD <replaceable class="parameter">table_constraint_using_index</replaceable>
     ALTER CONSTRAINT <replaceable class="parameter">constraint_name</replaceable> [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+    ALTER CONSTRAINT <replaceable class="parameter">constraint_name</replaceable> [USING INDEX <replaceable class="parameter">index_name</replaceable>]
     VALIDATE CONSTRAINT <replaceable class="parameter">constraint_name</replaceable>
     DROP CONSTRAINT [ IF EXISTS ]  <replaceable class="parameter">constraint_name</replaceable> [ RESTRICT | CASCADE ]
     DISABLE TRIGGER [ <replaceable class="parameter">trigger_name</replaceable> | ALL | USER ]
@@ -486,7 +487,7 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
    </varlistentry>
 
    <varlistentry>
-    <term><literal>ALTER CONSTRAINT</literal></term>
+    <term><literal>ALTER CONSTRAINT</literal> <replaceable class="parameter">constraint_name</replaceable> [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]</term>
     <listitem>
      <para>
       This form alters the attributes of a constraint that was previously
@@ -495,6 +496,18 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+   <varlistentry>
+    <term><literal>ALTER CONSTRAINT</literal> <replaceable class="parameter">constraint_name</replaceable> [USING INDEX <replaceable class="parameter">index_name</replaceable>]</term>
+    <listitem>
+     <para>
+      For unique, primary key, and exclusion constraints, this form replaces
+      the index that implements the constraint and renames the constraint to
+      match the new index. The new index must use the same access method and
+      enforce the constraint over the same key columns as the original index.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry>
     <term><literal>VALIDATE CONSTRAINT</literal></term>
     <listitem>
@@ -1539,6 +1552,31 @@ ALTER TABLE distributors ADD PRIMARY KEY (dist_id);
 </programlisting>
   </para>
 
+  <para>
+   To create a table with a named unique constraint:
+<programlisting>
+CREATE TABLE products (
+    product_no integer CONSTRAINT must_be_different UNIQUE,
+    name text,
+    price numeric
+);
+</programlisting>
+  </para>
+
+  <para>
+   To create a different index on the same column as the original
+   index and alter the constraint to use the new index:
+
+<programlisting>
+CREATE UNIQUE INDEX must_be_different_new
+    ON products USING BTREE(product_no);
+
+ALTER TABLE products
+    ALTER CONSTRAINT must_be_different
+    USING INDEX must_be_different_new;
+</programlisting>
+  </para>
+
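+  <para>
+   After this command, the constraint is renamed to
+   <literal>must_be_different_new</literal> and the original index
+   <literal>must_be_different</literal> is no longer used by the constraint,
+   so it can be dropped if it is not needed anymore.
+  </para>
+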
   <para>
    To move a table to a different tablespace:
 <programlisting>
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index f79044f39fc..e27c62ee389 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -328,6 +328,7 @@ static void AlterSeqNamespaces(Relation classRel, Relation rel,
 							   LOCKMODE lockmode);
 static ObjectAddress ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 										   bool recurse, bool recursing, LOCKMODE lockmode);
+static ObjectAddress ATExecAlterConstraintUsingIndex(Relation rel, AlterTableCmd *cmd);
 static ObjectAddress ATExecValidateConstraint(Relation rel, char *constrName,
 											  bool recurse, bool recursing, LOCKMODE lockmode);
 static int	transformColumnNameList(Oid relId, List *colList,
@@ -3782,6 +3783,7 @@ AlterTableGetLockLevel(List *cmds)
 				 */
 			case AT_ColumnDefault:
 			case AT_AlterConstraint:
+			case AT_AlterConstraintUsingIndex:
 			case AT_AddIndex:	/* from ADD CONSTRAINT */
 			case AT_AddIndexConstraint:
 			case AT_ReplicaIdentity:
@@ -4228,6 +4230,7 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
 			pass = AT_PASS_MISC;
 			break;
 		case AT_AlterConstraint:	/* ALTER CONSTRAINT */
+		case AT_AlterConstraintUsingIndex:	/* ALTER CONSTRAINT ... USING INDEX */
 			ATSimplePermissions(rel, ATT_TABLE);
 			pass = AT_PASS_MISC;
 			break;
@@ -4499,6 +4502,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab, Relation rel,
 		case AT_AlterConstraint:	/* ALTER CONSTRAINT */
 			address = ATExecAlterConstraint(rel, cmd, false, false, lockmode);
 			break;
+		case AT_AlterConstraintUsingIndex:
+			address = ATExecAlterConstraintUsingIndex(rel, cmd);
+			break;
 		case AT_ValidateConstraint: /* VALIDATE CONSTRAINT */
 			address = ATExecValidateConstraint(rel, cmd->name, false, false,
 											   lockmode);
@@ -9546,6 +9552,484 @@ tryAttachPartitionForeignKey(ForeignKeyCacheInfo *fk,
 	return true;
 }
 
+/*
+ * RelationGetIndexClass -- get OIDs of operator classes for each index column
+ */
+static oidvector *
+RelationGetIndexClass(Relation index)
+{
+	Datum		indclassDatum;
+	bool		isnull;
+
+	Assert(index != NULL);
+	Assert(index->rd_indextuple != NULL);
+
+	/*
+	 * indclass cannot be referenced directly through the C struct, because it
+	 * comes after the variable-width indkey field.  Must extract the datum
+	 * the hard way...
+	 */
+	indclassDatum = SysCacheGetAttr(INDEXRELID, index->rd_indextuple,
+									Anum_pg_index_indclass, &isnull);
+	Assert(!isnull);
+
+	return (oidvector *) DatumGetPointer(indclassDatum);
+}
+
+/*
+ * Check if oldIndex can be replaced by newIndex in a constraint.
+ * Returns NULL if the indexes are compatible, or a string describing
+ * why they are not.
+ */
+static const char *
+indexExchangeabilityError(Relation oldIndex, Relation newIndex)
+{
+	Form_pg_index oldIndexForm = oldIndex->rd_index,
+				newIndexForm = newIndex->rd_index;
+	List	   *oldPredicate = RelationGetIndexPredicate(oldIndex),
+			   *newPredicate = RelationGetIndexPredicate(newIndex);
+	oidvector  *oldIndexClass = RelationGetIndexClass(oldIndex),
+			   *newIndexClass = RelationGetIndexClass(newIndex);
+	int			i;
+
+	/* This function might need modifications if pg_index gets new fields */
+	Assert(Natts_pg_index == 20);
+
+	if (RelationGetForm(oldIndex)->relam != RelationGetForm(newIndex)->relam)
+		return "Indexes must have the same access methods";
+
+	/*
+	 * We do not want to replace the corresponding partitioned index and/or
+	 * corresponding partition indexes of other partitions.
+	 */
+
+	if (RelationGetForm(oldIndex)->relkind == RELKIND_PARTITIONED_INDEX ||
+		RelationGetForm(newIndex)->relkind == RELKIND_PARTITIONED_INDEX)
+		return "One of the indexes is a partitioned index";
+
+	if (RelationGetForm(oldIndex)->relispartition ||
+		RelationGetForm(newIndex)->relispartition)
+		return "One of the indexes is a partition index";
+
+	if (!oldIndexForm->indislive || !newIndexForm->indislive)
+		return "One of the indexes is being dropped";
+	if (!oldIndexForm->indisvalid || !newIndexForm->indisvalid)
+		return "One of the indexes is not valid for queries";
+	if (!oldIndexForm->indisready || !newIndexForm->indisready)
+		return "One of the indexes is not ready for inserts";
+
+	if (oldIndexForm->indisunique != newIndexForm->indisunique)
+		return "Both indexes must be either unique or not";
+
+	if (IndexRelationGetNumberOfKeyAttributes(oldIndex) !=
+		IndexRelationGetNumberOfKeyAttributes(newIndex))
+		return "Indexes must have the same number of key columns";
+
+	for (i = 0; i < IndexRelationGetNumberOfKeyAttributes(oldIndex); i++)
+	{
+		if (oldIndexForm->indkey.values[i] != newIndexForm->indkey.values[i])
+			return "Indexes must have the same key columns";
+
+		/*
+		 * A deterministic comparison considers strings that are not byte-wise
+		 * equal to be unequal even if they are considered logically equal by
+		 * the comparison. Comparison that is not deterministic can make the
+		 * collation be, say, case- or accent-insensitive. Therefore indexes
+		 * must have the same collation.
+		 */
+		if (oldIndex->rd_indcollation[i] != newIndex->rd_indcollation[i])
+			return "Indexes must have the same collation";
+
+		if (oldIndexClass->values[i] != newIndexClass->values[i])
+			return "Indexes must have the same operator class";
+
+		if (oldIndex->rd_indoption[i] != newIndex->rd_indoption[i])
+			return "Indexes must have the same per-column flag bits";
+	}
+
+	if (!equal(RelationGetIndexExpressions(oldIndex),
+			   RelationGetIndexExpressions(newIndex)))
+		return "Indexes must have the same index expressions";
+
+	if (!equal(oldPredicate, newPredicate))
+	{
+		if (oldPredicate && newPredicate)
+			return "Indexes must have the same partial index predicates";
+		else
+			return "Either none or both indexes must have partial index predicates";
+	}
+
+	/*
+	 * Check that the deferrable constraint will not "invalidate" the replica
+	 * identity index. (For each constraint index pg_index.indimmediate !=
+	 * pg_constraint.condeferrable. Therefore for a deferrable constraint
+	 * pg_index.indimmediate = false and such indexes cannot be used as
+	 * replica identity indexes.)
+	 */
+	if (!oldIndexForm->indimmediate && newIndexForm->indisreplident)
+		return "Deferrable constraint cannot use replica identity index";
+
+	return NULL;
+}
+
+/*
+ * Update all changed properties for the old / new constraint index.
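+ *
+ * is_new_constraint_index is true when indexOid refers to the index that
+ * will now implement the constraint, and false for the index being replaced.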
+ */
+static void
+AlterConstraintUpdateIndex(Form_pg_constraint currcon, Relation pg_index,
+						   Oid indexOid, bool is_new_constraint_index)
+{
+	HeapTuple	indexTuple;
+	Form_pg_index indexForm;
+	bool		dirty;
+
+	Assert(currcon != NULL);
+	Assert(pg_index != NULL);
+
+	indexTuple = SearchSysCacheCopy1(INDEXRELID, ObjectIdGetDatum(indexOid));
+	if (!HeapTupleIsValid(indexTuple))
+		elog(ERROR, "cache lookup failed for index %u", indexOid);
+	indexForm = (Form_pg_index) GETSTRUCT(indexTuple);
+
+	dirty = false;
+
+	/*
+	 * If this is an exclusion constraint, set pg_index.indisexclusion to true
+	 * for the new index constraint and false for the old index constraint.
+	 */
+	if (currcon->contype == CONSTRAINT_EXCLUSION &&
+		(indexForm->indisexclusion != is_new_constraint_index))
+	{
+		indexForm->indisexclusion = is_new_constraint_index;
+		dirty = true;
+	}
+
+	/*
+	 * If this is a primary key, set pg_index.indisprimary to true for the new
+	 * index constraint and false for the old index constraint.
+	 */
+	if (currcon->contype == CONSTRAINT_PRIMARY &&
+		(indexForm->indisprimary != is_new_constraint_index))
+	{
+		indexForm->indisprimary = is_new_constraint_index;
+		dirty = true;
+	}
+
+	/*
+	 * If the constraint is deferrable, set pg_index.indimmediate to false for
+	 * the new index constraint and true for the old index constraint. (For
+	 * each constraint index pg_index.indimmediate !=
+	 * pg_constraint.condeferrable. pg_index.indimmediate is true for each
+	 * stand-alone index without constraint.)
+	 */
+	if (currcon->condeferrable &&
+		(indexForm->indimmediate == is_new_constraint_index))
+	{
+		indexForm->indimmediate = !is_new_constraint_index;
+		dirty = true;
+	}
+
+	if (dirty)
+	{
+		CatalogTupleUpdate(pg_index, &indexTuple->t_self, indexTuple);
+
+		InvokeObjectPostAlterHook(IndexRelationId, indexOid, 0);
+	}
+
+	heap_freetuple(indexTuple);
+}
+
+/*
+ * For this index:
+ * - create auto dependencies on simply-referenced columns;
+ * - if there are no simply-referenced columns, give the index an auto
+ *   dependency on the whole table.
+ *
+ * This function is based on a part of index_create().
+ */
+static void
+AddRelationDependenciesToIndex(Relation index)
+{
+	Form_pg_index indexForm = index->rd_index;
+	ObjectAddress myself,
+				referenced;
+	int			i,
+				attrnum;
+	bool		have_simple_col = false;
+
+	ObjectAddressSet(myself, RelationRelationId, RelationGetRelid(index));
+
+	/* Create auto dependencies on simply-referenced columns */
+	for (i = 0; i < IndexRelationGetNumberOfAttributes(index); i++)
+	{
+		attrnum = indexForm->indkey.values[i];
+		if (attrnum != 0)
+		{
+			ObjectAddressSubSet(referenced, RelationRelationId,
+								indexForm->indrelid, attrnum);
+			recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO);
+
+			have_simple_col = true;
+		}
+	}
+
+	/*
+	 * If there are no simply-referenced columns, give the index an auto
+	 * dependency on the whole table.
+	 */
+	if (!have_simple_col)
+	{
+		ObjectAddressSet(referenced, RelationRelationId, indexForm->indrelid);
+		recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO);
+	}
+}
+
+/*
+ * ALTER TABLE ALTER CONSTRAINT USING INDEX
+ *
+ * Replace an index of a constraint.
+ *
+ * Currently works for UNIQUE, PRIMARY KEY, EXCLUSION and FOREIGN KEY
+ * constraints.  The index can be replaced only with an index of the same
+ * type and the same configuration (key columns, operator classes, etc.).
+ *
+ * If the constraint is modified, returns its address; otherwise, returns
+ * InvalidObjectAddress.
+ */
+static ObjectAddress
+ATExecAlterConstraintUsingIndex(Relation rel, AlterTableCmd *cmd)
+{
+	Relation	conrel;
+	SysScanDesc scan;
+	ScanKeyData key[3];
+	HeapTuple	contuple;
+	Form_pg_constraint currcon = NULL;
+	Oid			indexOid,
+				oldIndexOid;
+	Relation	indexRel,
+				oldIndexRel;
+	VariableShowStmt *indexName;
+	ObjectAddress address;
+	const char *replaceabilityCheckResult;
+	HeapTuple	copyTuple;
+	Form_pg_constraint copy_con;
+	ObjectAddress constraint_addr,
+				index_addr;
+	List	   *indexprs,
+			   *indpred;
+
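+	/* The grammar passes the target index name wrapped in a VariableShowStmt */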
+	indexName = castNode(VariableShowStmt, cmd->def);
+
+	conrel = table_open(ConstraintRelationId, RowExclusiveLock);
+
+	/*
+	 * Find and check the target constraint
+	 */
+	ScanKeyInit(&key[0],
+				Anum_pg_constraint_conrelid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(RelationGetRelid(rel)));
+	ScanKeyInit(&key[1],
+				Anum_pg_constraint_contypid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(InvalidOid));
+	ScanKeyInit(&key[2],
+				Anum_pg_constraint_conname,
+				BTEqualStrategyNumber, F_NAMEEQ,
+				CStringGetDatum(cmd->name));
+	scan = systable_beginscan(conrel, ConstraintRelidTypidNameIndexId,
+							  true, NULL, 3, key);
+
+	/* There can be at most one matching row */
+	if (!HeapTupleIsValid(contuple = systable_getnext(scan)))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("constraint \"%s\" of relation \"%s\" does not exist",
+						cmd->name, RelationGetRelationName(rel))));
+
+	currcon = (Form_pg_constraint) GETSTRUCT(contuple);
+	if (currcon->contype != CONSTRAINT_UNIQUE &&
+		currcon->contype != CONSTRAINT_EXCLUSION &&
+		currcon->contype != CONSTRAINT_PRIMARY &&
+		currcon->contype != CONSTRAINT_FOREIGN)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("constraint \"%s\" of relation \"%s\" is not a primary "
+						"key, unique constraint, exclusion constraint "
+						"or foreign constraint",
+						"or foreign key constraint",
+
+	oldIndexOid = currcon->conindid;
+	oldIndexRel = index_open(oldIndexOid, ShareLock);
+
+	/* Check that the index exists */
+	indexOid = get_relname_relid(indexName->name, rel->rd_rel->relnamespace);
+	if (!OidIsValid(indexOid))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("index \"%s\" for table \"%s\" does not exist",
+						indexName->name, RelationGetRelationName(rel))));
+
+	indexRel = index_open(indexOid, ShareLock);
+
+	/* Check that the index is on the relation we're altering. */
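+	/*
+	 * (For a foreign key constraint the supporting index belongs to the
+	 * referenced table, not to the table being altered, so that case is
+	 * exempted from this check.)
+	 */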
+	if ((indexRel->rd_index == NULL ||
+		indexRel->rd_index->indrelid != RelationGetRelid(rel)) &&
+		currcon->contype != CONSTRAINT_FOREIGN)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("\"%s\" is not an index for table \"%s\"",
+						indexName->name, RelationGetRelationName(rel))));
+
+	/*
+	 * Check if our constraint uses this index (and therefore everything is
+	 * already in order).
+	 */
+	if (oldIndexOid == indexOid)
+	{
+		ereport(NOTICE,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("constraint \"%s\" already uses index \"%s\", skipping",
+						cmd->name, indexName->name)));
+		address = InvalidObjectAddress;
+		goto cleanup;
+	}
+
+	/* Check if another constraint already uses this index */
+	if (currcon->contype != CONSTRAINT_FOREIGN &&
+		OidIsValid(get_index_constraint(indexOid)))
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("index \"%s\" is already associated with a constraint",
+						indexName->name)));
+
+	/* Check if the new index is compatible with our constraint */
+	if ((replaceabilityCheckResult =
+		 indexExchangeabilityError(oldIndexRel, indexRel)))
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("index in constraint \"%s\" cannot be replaced by "
+						"\"%s\"",
+						cmd->name, indexName->name),
+				 errdetail("%s.", replaceabilityCheckResult)));
+
+	/* OK, change the index for this constraint */
+	indexprs = RelationGetIndexExpressions(indexRel);
+	indpred = RelationGetIndexPredicate(indexRel);
+
+	/*
+	 * Now update the catalog, while we have the door open.
+	 */
+	copyTuple = heap_copytuple(contuple);
+	copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
+	copy_con->conindid = indexOid;
+	if (currcon->contype == CONSTRAINT_EXCLUSION ||
+		currcon->contype == CONSTRAINT_PRIMARY ||
+		currcon->contype == CONSTRAINT_UNIQUE)
+	{
+		/* Rename the constraint to match the index's name */
+		ereport(NOTICE,
+				(errmsg("ALTER TABLE / ALTER CONSTRAINT USING INDEX will"
+						" rename constraint \"%s\" to \"%s\"",
+						cmd->name, indexName->name)));
+		namestrcpy(&(copy_con->conname), indexName->name);
+
+		/*
+		 * If the replaced index is a replica identity index, remind the user
+		 * to change the table's replica identity first and only then drop the
+		 * "unnecessary" index.
+		 */
+		if (oldIndexRel->rd_index->indisreplident)
+			ereport(NOTICE,
+					(errmsg("replaced index \"%s\" is still chosen as replica"
+							" identity",
+							RelationGetRelationName(oldIndexRel))));
+
+	}
+	CatalogTupleUpdate(conrel, &copyTuple->t_self, copyTuple);
+
+	InvokeObjectPostAlterHook(ConstraintRelationId, currcon->oid, 0);
+
+	heap_freetuple(copyTuple);
+
+	/* Update old and new indexes if necessary */
+	if (currcon->contype == CONSTRAINT_EXCLUSION ||
+		currcon->contype == CONSTRAINT_PRIMARY ||
+		currcon->condeferrable)
+	{
+		Relation	pg_index;
+
+		pg_index = table_open(IndexRelationId, RowExclusiveLock);
+		AlterConstraintUpdateIndex(currcon, pg_index, indexOid, true);
+		AlterConstraintUpdateIndex(currcon, pg_index, oldIndexOid, false);
+		table_close(pg_index, RowExclusiveLock);
+	}
+
+	/* Update dependencies */
+
+	/* For foreign constraints */
+	if (currcon->contype == CONSTRAINT_FOREIGN)
+	{
+		changeDependencyFor(ConstraintRelationId, currcon->oid,
+							RelationRelationId, oldIndexOid, indexOid);
+	}
+	/* For exclusion, primary and unique constraints */
+	else
+	{
+		/* The old index is now independent of any constraints */
+		deleteDependencyRecordsForClass(RelationRelationId, oldIndexOid,
+										ConstraintRelationId,
+										DEPENDENCY_INTERNAL);
+
+		/*
+		 * The old index now depends on its simply-referenced columns and/or
+		 * its table.
+		 */
+		AddRelationDependenciesToIndex(oldIndexRel);
+
+		/* The new index now depends on our constraint */
+		ObjectAddressSet(constraint_addr, ConstraintRelationId, currcon->oid);
+		ObjectAddressSet(index_addr, RelationRelationId, indexOid);
+		recordDependencyOn(&index_addr, &constraint_addr, DEPENDENCY_INTERNAL);
+
+		/*
+		 * The new index is now independent of its simply-referenced columns
+		 * and/or its table.
+		 */
+		deleteDependencyRecordsForClass(RelationRelationId, indexOid,
+										RelationRelationId, DEPENDENCY_AUTO);
+
+		/* Restore dependencies on anything mentioned in index expressions */
+		if (indexprs)
+			recordDependencyOnSingleRelExpr(&index_addr,
+											(Node *) indexprs,
+											RelationGetRelid(rel),
+											DEPENDENCY_NORMAL,
+											DEPENDENCY_AUTO, false);
+
+		/* Restore dependencies on anything mentioned in predicate */
+		if (indpred)
+			recordDependencyOnSingleRelExpr(&index_addr,
+											(Node *) indpred,
+											RelationGetRelid(rel),
+											DEPENDENCY_NORMAL,
+											DEPENDENCY_AUTO, false);
+	}
+
+	/* Invalidate relcache so that others see the new attributes */
+	CacheInvalidateRelcache(rel);
+
+	ObjectAddressSet(address, ConstraintRelationId, currcon->oid);
+
+cleanup:
+	systable_endscan(scan);
+	table_close(conrel, RowExclusiveLock);
+
+	index_close(oldIndexRel, NoLock);
+	index_close(indexRel, NoLock);
+
+	return address;
+}
 
 /*
  * ALTER TABLE ALTER CONSTRAINT
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4ff35095b85..8ff755df1b1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -2337,6 +2337,17 @@ alter_table_cmd:
 									NULL, NULL, yyscanner);
 					$$ = (Node *)n;
 				}
+			/* ALTER TABLE <name> ALTER CONSTRAINT ... USING INDEX */
+			| ALTER CONSTRAINT name USING INDEX name
+				{
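+					/*
+					 * A VariableShowStmt node serves only as a carrier for
+					 * the index name that follows USING INDEX.
+					 */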
+					AlterTableCmd *n = makeNode(AlterTableCmd);
+					VariableShowStmt *c = makeNode(VariableShowStmt);
+					n->subtype = AT_AlterConstraintUsingIndex;
+					n->name = $3;
+					n->def = (Node *) c;
+					c->name = $6;
+					$$ = (Node *)n;
+				}
 			/* ALTER TABLE <name> VALIDATE CONSTRAINT ... */
 			| VALIDATE CONSTRAINT name
 				{
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 5e1ffafb91b..68f2c358a65 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1803,6 +1803,7 @@ typedef enum AlterTableType
 	AT_ReAddConstraint,			/* internal to commands/tablecmds.c */
 	AT_ReAddDomainConstraint,	/* internal to commands/tablecmds.c */
 	AT_AlterConstraint,			/* alter constraint */
+	AT_AlterConstraintUsingIndex,	/* alter constraint using index */
 	AT_ValidateConstraint,		/* validate constraint */
 	AT_ValidateConstraintRecurse,	/* internal to commands/tablecmds.c */
 	AT_AddIndexConstraint,		/* add constraint using existing index */
diff --git a/src/test/regress/input/constraints.source b/src/test/regress/input/constraints.source
index c325b2753d4..00e5c5c1ae6 100644
--- a/src/test/regress/input/constraints.source
+++ b/src/test/regress/input/constraints.source
@@ -552,3 +552,714 @@ DROP DOMAIN constraint_comments_dom;
 
 DROP ROLE regress_constraint_comments;
 DROP ROLE regress_constraint_comments_noaccess;
+
+--
+--
+--
+-- ALTER CONSTRAINT ... USING INDEX
+--
+--
+--
+
+CREATE FUNCTION show_some_indexes_from_relation(
+	searched_relname name,
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	index_relkind "char",
+	index_relispartition boolean,
+	indisunique boolean,
+	indisprimary boolean,
+	indisexclusion boolean,
+	indkey int2vector,
+	indislive boolean,
+	indisvalid boolean,
+	indisready boolean,
+	indoption int2vector,
+	indcollation oidvector,
+	indimmediate boolean,
+	indisreplident boolean,
+	depends_on_table boolean,
+	depends_on_simple_columns boolean,
+	depends_on_constraint boolean,
+	conname name
+)
+AS $$
+	SELECT r.relname, i.relname, i.relkind, i.relispartition, indisunique,
+		   indisprimary, indisexclusion, indkey, indislive, indisvalid,
+		   indisready, indoption, indcollation, indimmediate, indisreplident,
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_class'::regclass AND
+						refobjid = indrelid AND
+						refobjsubid = 0),
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_class'::regclass AND
+						refobjid = indrelid AND
+						refobjsubid != 0),
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_constraint'::regclass AND
+						refobjid = c.oid),
+		   conname
+	FROM pg_index
+	JOIN pg_class i ON indexrelid = i.oid
+	JOIN pg_class r ON indrelid = r.oid
+	LEFT JOIN pg_constraint c ON indexrelid = c.conindid
+	WHERE r.relname = searched_relname AND
+		  (searched_indnames IS NULL OR i.relname = ANY (searched_indnames))
+	ORDER BY indkey, i.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_indexes_from_relation(searched_relname name)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	index_relkind "char",
+	index_relispartition boolean,
+	indisunique boolean,
+	indisprimary boolean,
+	indisexclusion boolean,
+	indkey int2vector,
+	indislive boolean,
+	indisvalid boolean,
+	indisready boolean,
+	indoption int2vector,
+	indcollation oidvector,
+	indimmediate boolean,
+	indisreplident boolean,
+	depends_on_table boolean,
+	depends_on_simple_columns boolean,
+	depends_on_constraint boolean,
+	conname name
+)
+AS $$
+	SELECT * FROM show_some_indexes_from_relation(searched_relname, NULL);
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_some_index_exprs_pred(
+	searched_relname name,
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	indpred text,
+	indexprs text
+)
+AS $$
+	SELECT
+		r.relname,
+		i.relname,
+		pg_get_expr(indpred, indrelid, true),
+		pg_get_expr(indexprs, indrelid, true)
+	FROM pg_index
+	JOIN pg_class i ON indexrelid = i.oid
+	JOIN pg_class r ON indrelid = r.oid
+	WHERE r.relname = searched_relname AND
+		  (searched_indnames IS NULL OR i.relname = ANY (searched_indnames))
+	ORDER BY indexprs, indpred, indkey, i.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_index_exprs_pred(searched_relname name)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	indpred text,
+	indexprs text
+)
+AS $$
+	SELECT * FROM show_some_index_exprs_pred(searched_relname, NULL);
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_constraints_named_like(searched_conname name)
+RETURNS TABLE
+(
+	conname name,
+	contype "char",
+	conkey smallint[],
+	condeferrable boolean
+)
+AS $$
+	SELECT conname, contype, conkey, condeferrable
+	FROM pg_constraint
+	WHERE conname LIKE searched_conname
+	ORDER BY conkey, conname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_index_dependencies_on_table_columns
+(
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	indname name,
+	indnatts smallint,
+	indnkeyatts smallint,
+	indkey int2vector,
+	attnum smallint,
+	attname name
+)
+AS $$
+	SELECT relname, indnatts, indnkeyatts, indkey, attnum, attname
+	FROM pg_index
+	JOIN pg_class ON indexrelid = pg_class.oid
+	JOIN pg_depend ON indexrelid = objid
+	JOIN pg_attribute ON attrelid = indrelid
+	WHERE relname = ANY (searched_indnames) AND
+		  classid = 'pg_class'::regclass AND
+		  refclassid = 'pg_class'::regclass AND
+		  refobjid = indrelid AND
+		  refobjsubid != 0 AND
+		  refobjsubid = attnum
+	ORDER BY relname, refobjsubid;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_alteridx_index_dependencies()
+RETURNS TABLE
+(
+	indname name,
+	referenced_indname name
+)
+AS $$
+	SELECT c.relname, ref_c.relname
+	FROM
+		pg_index AS i
+		JOIN pg_class AS c ON c.oid = i.indexrelid
+		JOIN pg_depend ON objid = i.indexrelid
+		JOIN pg_index AS ref_i ON refobjid = ref_i.indexrelid
+		JOIN pg_class AS ref_c ON ref_c.oid = ref_i.indexrelid
+	WHERE classid = 'pg_class'::regclass
+		AND refclassid = 'pg_class'::regclass
+		AND c.relname like '%alteridx%'
+	ORDER BY c.relname, ref_c.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_alteridx_constraint_dependencies()
+RETURNS TABLE
+(
+	conname name,
+	referenced_conname name
+)
+AS $$
+	SELECT con.conname, ref_con.conname
+	FROM
+		pg_constraint AS con
+		JOIN pg_depend ON objid = con.oid
+		JOIN pg_constraint AS ref_con ON refobjid = ref_con.oid
+	WHERE
+		classid = 'pg_constraint'::regclass AND
+		refclassid = 'pg_constraint'::regclass AND
+		con.conname like '%alteridx%'
+	ORDER BY con.conname, ref_con.conname;
+$$ LANGUAGE SQL;
+--
+--
+--
+CREATE TABLE alteridx_orig(
+id int PRIMARY KEY,
+uniq int CONSTRAINT alteridx_orig_uniq_key UNIQUE NOT NULL,
+parity int,
+msg text UNIQUE,
+ir int4range,
+partition_key int,
+EXCLUDE using gist(ir with &&),
+EXCLUDE USING btree((id + uniq) WITH =) WHERE (id > 2),
+EXCLUDE ((id + uniq) WITH =) WHERE (parity < 4),
+CONSTRAINT alteridx_orig_double UNIQUE(id, uniq),
+CONSTRAINT alteridx_id_key_deferrable UNIQUE (id) DEFERRABLE,
+CHECK (parity > -10));
+
+CREATE TABLE partitioned_orig_alteridx(
+id int,
+uniq int,
+parity int,
+msg text,
+ir int4range,
+partition_key int UNIQUE,
+UNIQUE (partition_key, id))
+PARTITION BY RANGE (partition_key);
+
+ALTER TABLE partitioned_orig_alteridx ATTACH PARTITION alteridx_orig
+FOR VALUES FROM (0) TO (20);
+--
+--
+CREATE TABLE another_alteridx(
+id int PRIMARY KEY,
+uniq int UNIQUE,
+parity int,
+msg text,
+ir int4range,
+EXCLUDE USING gist(ir with &&),
+EXCLUDE USING btree((id + 4) WITH =));
+--
+--
+CREATE TABLE third_alteridx(
+	id1 int,
+	id2 int,
+	PRIMARY KEY (id1, id2)
+);
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+SELECT * FROM show_constraints_named_like('alteridx_%');
+
+SELECT * FROM show_indexes_from_relation('partitioned_orig_alteridx');
+SELECT * FROM show_constraints_named_like('partitioned_%');
+
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+SELECT * FROM show_constraints_named_like('another_%');
+--
+--
+-- Checking that constraints work before index replacement
+--
+--
+INSERT INTO alteridx_orig SELECT n, n, n%2, CHR(62+n) || CHR(63+n),
+int4range(2*n,2*n+1), n from generate_series(1,10) as gs(n);
+INSERT INTO another_alteridx SELECT n, n, n%2, CHR(62+n) || CHR(63+n),
+int4range(2*n,2*n+1) from generate_series(1,10) as gs(n);
+--
+INSERT INTO alteridx_orig VALUES(1, 0, 1, 'AA', int4range(102, 103), 15); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 1, 1, 'AA', int4range(104, 105), 15); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(1, 107), 15); -- failure here
+INSERT INTO alteridx_orig VALUES(NULL, 0, 1, 'AA', int4range(102, 107), 15); -- failure here
+INSERT INTO alteridx_orig VALUES(0, NULL, 1, 'AA', int4range(102, 107), 15); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(102, 107), 15);
+SELECT * FROM alteridx_orig;
+--
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_incl ON alteridx_orig(uniq) INCLUDE (ir);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_incl2 ON alteridx_orig(uniq) INCLUDE (parity);
+CREATE UNIQUE INDEX CONCURRENTLY alteridx_new_uniq_key_back ON alteridx_orig USING BTREE(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_pred ON alteridx_orig(uniq) WHERE parity=1;
+CREATE INDEX alteridx_new_uniq_key_no_unique ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_with_msg ON alteridx_orig(uniq, msg);
+CREATE UNIQUE INDEX alteridx_new_msg_key_ops ON alteridx_orig(msg text_pattern_ops);
+CREATE UNIQUE INDEX alteridx_new_pkey ON alteridx_orig(id);
+CREATE INDEX alteridx_new_ir_excl ON alteridx_orig using gist(ir range_ops);
+CREATE INDEX alteridx_new_expr_excl ON alteridx_orig((id + uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl_hash ON alteridx_orig USING hash((id + uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl_pred ON alteridx_orig((id + uniq)) WHERE (id > 3);
+CREATE INDEX alteridx_new_expr_excl_pred2 ON alteridx_orig((id + uniq)) WHERE (id > (3 - 1));
+CREATE INDEX alteridx_new_expr_excl_wrong ON alteridx_orig((id - uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl1 ON alteridx_orig((id + uniq)) WHERE (parity < 4);
+CREATE INDEX another_alteridx_new_expr_excl ON another_alteridx((id + 4));
+CREATE INDEX another_alteridx_new_expr_excl_different ON another_alteridx((id + 2 + 2));
+CREATE INDEX another_alteridx_new_expr_excl_different2 ON another_alteridx((id + (2 + 2)));
+CREATE UNIQUE INDEX alteridx_new_double ON alteridx_orig(id, uniq);
+CREATE INDEX alteridx_new_double_not_unique ON alteridx_orig(id, uniq);
+CREATE UNIQUE INDEX alteridx_id_key ON alteridx_orig(id);
+--
+CREATE UNIQUE INDEX third_alteridx_pkey_new ON third_alteridx(id1, id2);
+CREATE UNIQUE INDEX third_alteridx_pkey_opp ON third_alteridx(id2, id1);
+CREATE UNIQUE INDEX third_alteridx_pkey_single ON third_alteridx(id1);
+CREATE INDEX third_alteridx_pkey_not_unique ON third_alteridx(id1, id2);
+--
+CREATE UNIQUE INDEX alteridx_new_partition_key_key
+ON alteridx_orig(partition_key);
+CREATE UNIQUE INDEX alteridx_new_partition_key_id_key
+ON alteridx_orig(partition_key,id);
+CREATE UNIQUE INDEX partitioned_new_alteridx_partition_key_key
+ON partitioned_orig_alteridx(partition_key);
+CREATE UNIQUE INDEX partitioned_new_alteridx_partition_key_id_key
+ON partitioned_orig_alteridx(partition_key,id);
+
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key_opt ON alteridx_orig(uniq);
+UPDATE pg_index SET indoption='1'
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_opt';
+CREATE UNIQUE INDEX alteridx_new_msg_key_coll ON alteridx_orig(msg);
+UPDATE pg_index SET indcollation='12341'
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_msg_key_coll';
+--
+--
+-- Tests for unique constraint --
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl",
+	  "alteridx_new_uniq_key_incl2",
+	  "alteridx_orig_expr_excl1",
+	  "alteridx_new_expr_excl1"}'::name[]);
+SELECT * FROM show_indexes_from_relation('partitioned_orig_alteridx');
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+SELECT * FROM show_index_exprs_pred('another_alteridx');
+SELECT * FROM show_constraints_named_like('another_%');
+DROP INDEX alteridx_orig_uniq_key; -- failure here
+SELECT * FROM show_constraints_named_like('alteridx_%');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_parity_check
+USING INDEX alteridx_new_uniq_key; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_uniq_key
+USING INDEX alteridx_orig_uniq_key;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_uniq_key
+USING INDEX alteridx_new_uniq_key;
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+DROP INDEX alteridx_orig_uniq_key;
+DROP INDEX alteridx_new_uniq_key; -- failure here
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key
+USING INDEX alteridx_new_uniq_key_incl2;
+--
+--
+-- Checking that all dependencies on simply-referenced columns are correctly
+-- added for old constraint index (included columns may differ in the old and
+-- new constraint index).
+--
+--
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_incl2
+USING INDEX alteridx_new_uniq_key_incl;
+
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+--
+DROP INDEX alteridx_new_uniq_key;
+DROP INDEX alteridx_new_uniq_key_incl; -- failure here
+SELECT * FROM show_constraints_named_like('alteridx_%');
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_incl
+USING INDEX alteridx_new_uniq_key_back;
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_pred; -- failure here
+DROP INDEX alteridx_new_uniq_key_pred;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_no_unique; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_msg_key_coll; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_msg_key_ops; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_uniq_key_incl; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX another_alteridx_uniq_key; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_with_msg; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_opt; -- failure here
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_uniq_key_incl; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_double_not_unique; -- failure here
+--
+--
+-- Checking the notification if the replica identity index is no longer used in
+-- the constraint.
+--
+--
+ALTER TABLE alteridx_orig REPLICA IDENTITY USING INDEX alteridx_orig_double;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_double;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+--
+--
+-- Checking that deferrable constraints cannot use replica identity index
+--
+--
+ALTER TABLE alteridx_orig REPLICA IDENTITY USING INDEX alteridx_id_key;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_id_key_deferrable
+USING INDEX alteridx_id_key; -- failure here
+
+ALTER TABLE alteridx_orig REPLICA IDENTITY DEFAULT;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_id_key_deferrable
+USING INDEX alteridx_id_key;
+--
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_orig_double; -- failure here
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_new_double;
+DROP INDEX alteridx_orig_double;
+DROP INDEX alteridx_new_double_not_unique;
+--
+--
+-- Tests for primary key constraint --
+--
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_pkey
+USING INDEX alteridx_new_pkey;
+SELECT * FROM show_constraints_named_like('alteridx_%');
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX another_alteridx_uniq_key; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX another_alteridx_pkey; -- failure here
+--
+SELECT * FROM show_indexes_from_relation('third_alteridx');
+SELECT * FROM show_constraints_named_like('third_%');
+
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_opp; -- failure here
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_not_unique; -- failure here
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_single; -- failure here
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_new;
+
+SELECT * FROM show_indexes_from_relation('third_alteridx');
+SELECT * FROM show_constraints_named_like('third_%');
+
+--
+--
+-- Tests for exclusion constraint --
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_ir_excl
+USING INDEX alteridx_new_ir_excl;
+SELECT * FROM show_constraints_named_like('alteridx_%');
+--
+ALTER TABLE alteridx_orig ADD CONSTRAINT alteridx_new_expr_excl2
+EXCLUDE ((id + uniq) WITH =) WHERE (id > 2);
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl2; -- failure here
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_new_expr_excl2;
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_orig_ir_excl; -- failure here
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+--
+--
+-- Checking that after simplifying the constants from index predicates some
+-- indexes are considered equal.
+--
+--
+SELECT * FROM show_some_index_exprs_pred(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl",
+	  "alteridx_new_expr_excl_pred",
+	  "alteridx_new_expr_excl_pred2"}'::name[]);
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_pred; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_pred2;
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl_pred2
+USING INDEX alteridx_new_expr_excl_hash; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl_pred2
+USING INDEX alteridx_new_expr_excl;
+--
+--
+-- Checking that all dependencies on columns from index expressions and/or index
+-- predicate are not removed for new constraint index (they always exist both
+-- for standalone and constraint indexes).
+--
+--
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl1
+USING INDEX alteridx_new_expr_excl1;
+
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+--
+--
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+DROP INDEX alteridx_new_expr_excl_wrong;
+ALTER TABLE alteridx_does_not_exist ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+--
+DROP INDEX alteridx_new_expr_excl_pred;
+DROP INDEX alteridx_orig_expr_excl;
+DROP INDEX alteridx_new_expr_excl_hash;
+DROP INDEX alteridx_new_expr_excl; -- failure here
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_expr_excl
+USING INDEX another_alteridx_new_expr_excl;
+--
+--
+-- Checking that after simplifying the constants from index expressions some
+-- indexes are considered equal.
+--
+--
+SELECT * FROM show_some_index_exprs_pred(
+	'another_alteridx',
+	'{"another_alteridx_new_expr_excl",
+	  "another_alteridx_new_expr_excl_different",
+	  "another_alteridx_new_expr_excl_different2"}'::name[]);
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_new_expr_excl
+USING INDEX another_alteridx_new_expr_excl_different; -- failure here
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_new_expr_excl
+USING INDEX another_alteridx_new_expr_excl_different2;
+
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+SELECT * FROM show_constraints_named_like('another_%');
+--
+--
+-- Checking that DDL changes can be rolled back
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+
+BEGIN;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX alteridx_orig_pkey;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+ROLLBACK;
+
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+--
+--
+-- Checking constraints in partitions and partitioned tables
+--
+--
+SELECT * FROM show_alteridx_index_dependencies();
+SELECT * FROM show_alteridx_constraint_dependencies();
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_key
+USING INDEX alteridx_new_partition_key_key; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_id_key
+USING INDEX alteridx_new_partition_key_id_key; -- failure here
+
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_key
+USING INDEX partitioned_new_alteridx_partition_key_key; -- failure here
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_id_key
+USING INDEX partitioned_new_alteridx_partition_key_id_key; -- failure here
+
+ALTER TABLE partitioned_orig_alteridx DETACH PARTITION alteridx_orig;
+
+SELECT * FROM show_alteridx_index_dependencies();
+SELECT * FROM show_alteridx_constraint_dependencies();
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_key
+USING INDEX alteridx_new_partition_key_key;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_id_key
+USING INDEX alteridx_new_partition_key_id_key;
+
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_key
+USING INDEX partitioned_new_alteridx_partition_key_key; -- failure here
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_id_key
+USING INDEX partitioned_new_alteridx_partition_key_id_key; -- failure here
+--
+--
+-- Dropping replaced indexes
+--
+--
+DROP INDEX alteridx_orig_ir_excl;
+DROP INDEX alteridx_orig_pkey;
+DROP INDEX alteridx_new_uniq_key_no_unique;
+DROP INDEX alteridx_new_uniq_key_incl;
+DROP INDEX alteridx_new_msg_key_coll;
+DROP INDEX alteridx_new_msg_key_ops;
+--
+-- Trying to drop indexes used in constraints after replacement
+--
+DROP INDEX alteridx_new_pkey; -- failure here
+DROP INDEX alteridx_new_uniq_key_back; -- failure here
+DROP INDEX alteridx_orig_msg_key; -- failure here
+DROP INDEX alteridx_new_ir_excl; -- failure here
+DROP INDEX alteridx_new_uniq_key_opt;
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+SELECT * FROM show_constraints_named_like('alteridx_%');
+--
+--
+-- Checking that indexes unavailable for use can't be picked for replacement
+--
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_live ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_valid ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_ready ON alteridx_orig(uniq);
+UPDATE pg_index SET indislive=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_live';
+UPDATE pg_index SET indisvalid=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_valid';
+UPDATE pg_index SET indisready=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_ready';
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_live; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_valid; -- failure here
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_ready; -- failure here
+--
+DROP INDEX alteridx_new_uniq_key_not_live;
+DROP INDEX alteridx_new_uniq_key_not_valid;
+DROP INDEX alteridx_new_uniq_key_not_ready;
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+SELECT * FROM show_constraints_named_like('alteridx_%');
+--
+--
+-- Checking that constraints still work
+--
+--
+INSERT INTO alteridx_orig VALUES(1, 0, 1, 'AA', int4range(102, 103), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 1, 1, 'AA', int4range(104, 105), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(100, 107), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(102, 107), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(NULL, 0, 1, 'AA', int4range(102, 107), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(0, NULL, 1, 'AA', int4range(102, 107), 17); -- failure here
+INSERT INTO alteridx_orig VALUES(-1, -1, 1, 'BB', int4range(108, 110), 17);
+--
+SELECT * FROM alteridx_orig;
+--
+--
+DROP FUNCTION show_indexes_from_relation(searched_relname name);
+DROP FUNCTION show_constraints_named_like(searched_conname name);
+DROP FUNCTION show_index_exprs_pred(searched_relname name);
+DROP TABLE alteridx_orig;
+DROP TABLE partitioned_orig_alteridx;
+DROP TABLE another_alteridx;
+DROP TABLE third_alteridx;
diff --git a/src/test/regress/output/constraints.source b/src/test/regress/output/constraints.source
index b727c6150ae..d7fc0fa00aa 100644
--- a/src/test/regress/output/constraints.source
+++ b/src/test/regress/output/constraints.source
@@ -736,3 +736,1882 @@ DROP TABLE constraint_comments_tbl;
 DROP DOMAIN constraint_comments_dom;
 DROP ROLE regress_constraint_comments;
 DROP ROLE regress_constraint_comments_noaccess;
+--
+--
+--
+-- ALTER CONSTRAINT ... USING INDEX
+--
+--
+--
+CREATE FUNCTION show_some_indexes_from_relation(
+	searched_relname name,
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	index_relkind "char",
+	index_relispartition boolean,
+	indisunique boolean,
+	indisprimary boolean,
+	indisexclusion boolean,
+	indkey int2vector,
+	indislive boolean,
+	indisvalid boolean,
+	indisready boolean,
+	indoption int2vector,
+	indcollation oidvector,
+	indimmediate boolean,
+	indisreplident boolean,
+	depends_on_table boolean,
+	depends_on_simple_columns boolean,
+	depends_on_constraint boolean,
+	conname name
+)
+AS $$
+	SELECT r.relname, i.relname, i.relkind, i.relispartition, indisunique,
+		   indisprimary, indisexclusion, indkey, indislive, indisvalid,
+		   indisready, indoption, indcollation, indimmediate, indisreplident,
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_class'::regclass AND
+						refobjid = indrelid AND
+						refobjsubid = 0),
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_class'::regclass AND
+						refobjid = indrelid AND
+						refobjsubid != 0),
+		   EXISTS(SELECT *
+				  FROM pg_depend
+				  WHERE classid = 'pg_class'::regclass AND
+						objid = indexrelid AND
+						refclassid = 'pg_constraint'::regclass AND
+						refobjid = c.oid),
+		   conname
+	FROM pg_index
+	JOIN pg_class i ON indexrelid = i.oid
+	JOIN pg_class r ON indrelid = r.oid
+	LEFT JOIN pg_constraint c ON indexrelid = c.conindid
+	WHERE r.relname = searched_relname AND
+		  (searched_indnames IS NULL OR i.relname = ANY (searched_indnames))
+	ORDER BY indkey, i.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_indexes_from_relation(searched_relname name)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	index_relkind "char",
+	index_relispartition boolean,
+	indisunique boolean,
+	indisprimary boolean,
+	indisexclusion boolean,
+	indkey int2vector,
+	indislive boolean,
+	indisvalid boolean,
+	indisready boolean,
+	indoption int2vector,
+	indcollation oidvector,
+	indimmediate boolean,
+	indisreplident boolean,
+	depends_on_table boolean,
+	depends_on_simple_columns boolean,
+	depends_on_constraint boolean,
+	conname name
+)
+AS $$
+	SELECT * FROM show_some_indexes_from_relation(searched_relname, NULL);
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_some_index_exprs_pred(
+	searched_relname name,
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	indpred text,
+	indexprs text
+)
+AS $$
+	SELECT
+		r.relname,
+		i.relname,
+		pg_get_expr(indpred, indrelid, true),
+		pg_get_expr(indexprs, indrelid, true)
+	FROM pg_index
+	JOIN pg_class i ON indexrelid = i.oid
+	JOIN pg_class r ON indrelid = r.oid
+	WHERE r.relname = searched_relname AND
+		  (searched_indnames IS NULL OR i.relname = ANY (searched_indnames))
+	ORDER BY indexprs, indpred, indkey, i.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_index_exprs_pred(searched_relname name)
+RETURNS TABLE
+(
+	relname name,
+	indname name,
+	indpred text,
+	indexprs text
+)
+AS $$
+	SELECT * FROM show_some_index_exprs_pred(searched_relname, NULL);
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_constraints_named_like(searched_conname name)
+RETURNS TABLE
+(
+	conname name,
+	contype "char",
+	conkey smallint[],
+	condeferrable boolean
+)
+AS $$
+	SELECT conname, contype, conkey, condeferrable
+	FROM pg_constraint
+	WHERE conname LIKE searched_conname
+	ORDER BY conkey, conname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_index_dependencies_on_table_columns
+(
+	searched_indnames name[]
+)
+RETURNS TABLE
+(
+	indname name,
+	indnatts smallint,
+	indnkeyatts smallint,
+	indkey int2vector,
+	attnum smallint,
+	attname name
+)
+AS $$
+	SELECT relname, indnatts, indnkeyatts, indkey, attnum, attname
+	FROM pg_index
+	JOIN pg_class ON indexrelid = pg_class.oid
+	JOIN pg_depend ON indexrelid = objid
+	JOIN pg_attribute ON attrelid = indrelid
+	WHERE relname = ANY (searched_indnames) AND
+		  classid = 'pg_class'::regclass AND
+		  refclassid = 'pg_class'::regclass AND
+		  refobjid = indrelid AND
+		  refobjsubid != 0 AND
+		  refobjsubid = attnum
+	ORDER BY relname, refobjsubid;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_alteridx_index_dependencies()
+RETURNS TABLE
+(
+	indname name,
+	referenced_indname name
+)
+AS $$
+	SELECT c.relname, ref_c.relname
+	FROM
+		pg_index AS i
+		JOIN pg_class AS c ON c.oid = i.indexrelid
+		JOIN pg_depend ON objid = i.indexrelid
+		JOIN pg_index AS ref_i ON refobjid = ref_i.indexrelid
+		JOIN pg_class AS ref_c ON ref_c.oid = ref_i.indexrelid
+	WHERE classid = 'pg_class'::regclass
+		AND refclassid = 'pg_class'::regclass
+		AND c.relname like '%alteridx%'
+	ORDER BY c.relname, ref_c.relname;
+$$ LANGUAGE SQL;
+--
+CREATE FUNCTION show_alteridx_constraint_dependencies()
+RETURNS TABLE
+(
+	conname name,
+	referenced_conname name
+)
+AS $$
+	SELECT con.conname, ref_con.conname
+	FROM
+		pg_constraint AS con
+		JOIN pg_depend ON objid = con.oid
+		JOIN pg_constraint AS ref_con ON refobjid = ref_con.oid
+	WHERE
+		classid = 'pg_constraint'::regclass AND
+		refclassid = 'pg_constraint'::regclass AND
+		con.conname like '%alteridx%'
+	ORDER BY con.conname, ref_con.conname;
+$$ LANGUAGE SQL;
+--
+--
+--
+CREATE TABLE alteridx_orig(
+id int PRIMARY KEY,
+uniq int CONSTRAINT alteridx_orig_uniq_key UNIQUE NOT NULL,
+parity int,
+msg text UNIQUE,
+ir int4range,
+partition_key int,
+EXCLUDE using gist(ir with &&),
+EXCLUDE USING btree((id + uniq) WITH =) WHERE (id > 2),
+EXCLUDE ((id + uniq) WITH =) WHERE (parity < 4),
+CONSTRAINT alteridx_orig_double UNIQUE(id, uniq),
+CONSTRAINT alteridx_id_key_deferrable UNIQUE (id) DEFERRABLE,
+CHECK (parity > -10));
+CREATE TABLE partitioned_orig_alteridx(
+id int,
+uniq int,
+parity int,
+msg text,
+ir int4range,
+partition_key int UNIQUE,
+UNIQUE (partition_key, id))
+PARTITION BY RANGE (partition_key);
+ALTER TABLE partitioned_orig_alteridx ATTACH PARTITION alteridx_orig
+FOR VALUES FROM (0) TO (20);
+--
+--
+CREATE TABLE another_alteridx(
+id int PRIMARY KEY,
+uniq int UNIQUE,
+parity int,
+msg text,
+ir int4range,
+EXCLUDE USING gist(ir with &&),
+EXCLUDE USING btree((id + 4) WITH =));
+--
+--
+CREATE TABLE third_alteridx(
+	id1 int,
+	id2 int,
+	PRIMARY KEY (id1, id2)
+);
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_orig_uniq_key             | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_uniq_key
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(10 rows)
+
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |  indpred   | indexprs  
+---------------+------------------------------------+------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl            | id > 2     | id + uniq
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4 | id + uniq
+ alteridx_orig | alteridx_id_key_deferrable         |            | 
+ alteridx_orig | alteridx_orig_pkey                 |            | 
+ alteridx_orig | alteridx_orig_double               |            | 
+ alteridx_orig | alteridx_orig_uniq_key             |            | 
+ alteridx_orig | alteridx_orig_msg_key              |            | 
+ alteridx_orig | alteridx_orig_ir_excl              |            | 
+ alteridx_orig | alteridx_orig_partition_key_key    |            | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |            | 
+(10 rows)
+
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key_deferrable         | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_orig_double               | u       | {1,2}  | f
+ alteridx_orig_uniq_key             | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+SELECT * FROM show_indexes_from_relation('partitioned_orig_alteridx');
+          relname          |                    indname                     | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |                    conname                     
+---------------------------+------------------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------------------
+ partitioned_orig_alteridx | partitioned_orig_alteridx_partition_key_key    | I             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | partitioned_orig_alteridx_partition_key_key
+ partitioned_orig_alteridx | partitioned_orig_alteridx_partition_key_id_key | I             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | partitioned_orig_alteridx_partition_key_id_key
+(2 rows)
+
+SELECT * FROM show_constraints_named_like('partitioned_%');
+                    conname                     | contype | conkey | condeferrable 
+------------------------------------------------+---------+--------+---------------
+ partitioned_orig_alteridx_partition_key_key    | u       | {6}    | f
+ partitioned_orig_alteridx_partition_key_id_key | u       | {6,1}  | f
+(2 rows)
+
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+     relname      |          indname           | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |          conname           
+------------------+----------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+----------------------------
+ another_alteridx | another_alteridx_expr_excl | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | another_alteridx_expr_excl
+ another_alteridx | another_alteridx_pkey      | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_pkey
+ another_alteridx | another_alteridx_uniq_key  | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_uniq_key
+ another_alteridx | another_alteridx_ir_excl   | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_ir_excl
+(4 rows)
+
+SELECT * FROM show_constraints_named_like('another_%');
+          conname           | contype | conkey | condeferrable 
+----------------------------+---------+--------+---------------
+ another_alteridx_expr_excl | x       | {0}    | f
+ another_alteridx_pkey      | p       | {1}    | f
+ another_alteridx_uniq_key  | u       | {2}    | f
+ another_alteridx_ir_excl   | x       | {5}    | f
+(4 rows)
+
+--
+--
+-- Checking that constraints work before index replacement
+--
+--
+INSERT INTO alteridx_orig SELECT n, n, n%2, CHR(62+n) || CHR(63+n),
+int4range(2*n,2*n+1), n from generate_series(1,10) as gs(n);
+INSERT INTO another_alteridx SELECT n, n, n%2, CHR(62+n) || CHR(63+n),
+int4range(2*n,2*n+1) from generate_series(1,10) as gs(n);
+--
+INSERT INTO alteridx_orig VALUES(1, 0, 1, 'AA', int4range(102, 103), 15); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_pkey"
+DETAIL:  Key (id)=(1) already exists.
+INSERT INTO alteridx_orig VALUES(0, 1, 1, 'AA', int4range(104, 105), 15); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_uniq_key"
+DETAIL:  Key (uniq)=(1) already exists.
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(1, 107), 15); -- failure here
+ERROR:  conflicting key value violates exclusion constraint "alteridx_orig_ir_excl"
+DETAIL:  Key (ir)=([1,107)) conflicts with existing key (ir)=([2,3)).
+INSERT INTO alteridx_orig VALUES(NULL, 0, 1, 'AA', int4range(102, 107), 15); -- failure here
+ERROR:  null value in column "id" of relation "alteridx_orig" violates not-null constraint
+DETAIL:  Failing row contains (null, 0, 1, AA, [102,107), 15).
+INSERT INTO alteridx_orig VALUES(0, NULL, 1, 'AA', int4range(102, 107), 15); -- failure here
+ERROR:  null value in column "uniq" of relation "alteridx_orig" violates not-null constraint
+DETAIL:  Failing row contains (0, null, 1, AA, [102,107), 15).
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(102, 107), 15);
+SELECT * FROM alteridx_orig;
+ id | uniq | parity | msg |    ir     | partition_key 
+----+------+--------+-----+-----------+---------------
+  1 |    1 |      1 | ?@  | [2,3)     |             1
+  2 |    2 |      0 | @A  | [4,5)     |             2
+  3 |    3 |      1 | AB  | [6,7)     |             3
+  4 |    4 |      0 | BC  | [8,9)     |             4
+  5 |    5 |      1 | CD  | [10,11)   |             5
+  6 |    6 |      0 | DE  | [12,13)   |             6
+  7 |    7 |      1 | EF  | [14,15)   |             7
+  8 |    8 |      0 | FG  | [16,17)   |             8
+  9 |    9 |      1 | GH  | [18,19)   |             9
+ 10 |   10 |      0 | HI  | [20,21)   |            10
+  0 |    0 |      1 | AA  | [102,107) |            15
+(11 rows)
+
+--
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_incl ON alteridx_orig(uniq) INCLUDE (ir);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_incl2 ON alteridx_orig(uniq) INCLUDE (parity);
+CREATE UNIQUE INDEX CONCURRENTLY alteridx_new_uniq_key_back ON alteridx_orig USING BTREE(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_pred ON alteridx_orig(uniq) WHERE parity=1;
+CREATE INDEX alteridx_new_uniq_key_no_unique ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_with_msg ON alteridx_orig(uniq, msg);
+CREATE UNIQUE INDEX alteridx_new_msg_key_ops ON alteridx_orig(msg text_pattern_ops);
+CREATE UNIQUE INDEX alteridx_new_pkey ON alteridx_orig(id);
+CREATE INDEX alteridx_new_ir_excl ON alteridx_orig using gist(ir range_ops);
+CREATE INDEX alteridx_new_expr_excl ON alteridx_orig((id + uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl_hash ON alteridx_orig USING hash((id + uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl_pred ON alteridx_orig((id + uniq)) WHERE (id > 3);
+CREATE INDEX alteridx_new_expr_excl_pred2 ON alteridx_orig((id + uniq)) WHERE (id > (3 - 1));
+CREATE INDEX alteridx_new_expr_excl_wrong ON alteridx_orig((id - uniq)) WHERE (id > 2);
+CREATE INDEX alteridx_new_expr_excl1 ON alteridx_orig((id + uniq)) WHERE (parity < 4);
+CREATE INDEX another_alteridx_new_expr_excl ON another_alteridx((id + 4));
+CREATE INDEX another_alteridx_new_expr_excl_different ON another_alteridx((id + 2 + 2));
+CREATE INDEX another_alteridx_new_expr_excl_different2 ON another_alteridx((id + (2 + 2)));
+CREATE UNIQUE INDEX alteridx_new_double ON alteridx_orig(id, uniq);
+CREATE INDEX alteridx_new_double_not_unique ON alteridx_orig(id, uniq);
+CREATE UNIQUE INDEX alteridx_id_key ON alteridx_orig(id);
+--
+CREATE UNIQUE INDEX third_alteridx_pkey_new ON third_alteridx(id1, id2);
+CREATE UNIQUE INDEX third_alteridx_pkey_opp ON third_alteridx(id2, id1);
+CREATE UNIQUE INDEX third_alteridx_pkey_single ON third_alteridx(id1);
+CREATE INDEX third_alteridx_pkey_not_unique ON third_alteridx(id1, id2);
+--
+CREATE UNIQUE INDEX alteridx_new_partition_key_key
+ON alteridx_orig(partition_key);
+CREATE UNIQUE INDEX alteridx_new_partition_key_id_key
+ON alteridx_orig(partition_key,id);
+CREATE UNIQUE INDEX partitioned_new_alteridx_partition_key_key
+ON partitioned_orig_alteridx(partition_key);
+CREATE UNIQUE INDEX partitioned_new_alteridx_partition_key_id_key
+ON partitioned_orig_alteridx(partition_key,id);
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key_opt ON alteridx_orig(uniq);
+UPDATE pg_index SET indoption='1'
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_opt';
+CREATE UNIQUE INDEX alteridx_new_msg_key_coll ON alteridx_orig(msg);
+UPDATE pg_index SET indcollation='12341'
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_msg_key_coll';
+--
+--
+-- Tests for unique constraint --
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_new_uniq_key              | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_pred         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_uniq_key             | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_uniq_key
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(33 rows)
+
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |   indpred    | indexprs  
+---------------+------------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl            | id > 2       | id + uniq
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl             | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl1            | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred        | id > 3       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2       | id > (3 - 1) | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_hash        | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_wrong       | id > 2       | id - uniq
+ alteridx_orig | alteridx_new_uniq_key_pred         | parity = 1   | 
+ alteridx_orig | alteridx_id_key                    |              | 
+ alteridx_orig | alteridx_id_key_deferrable         |              | 
+ alteridx_orig | alteridx_new_pkey                  |              | 
+ alteridx_orig | alteridx_orig_pkey                 |              | 
+ alteridx_orig | alteridx_new_double                |              | 
+ alteridx_orig | alteridx_new_double_not_unique     |              | 
+ alteridx_orig | alteridx_orig_double               |              | 
+ alteridx_orig | alteridx_new_uniq_key              |              | 
+ alteridx_orig | alteridx_new_uniq_key_back         |              | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    |              | 
+ alteridx_orig | alteridx_new_uniq_key_opt          |              | 
+ alteridx_orig | alteridx_orig_uniq_key             |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        |              | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl         |              | 
+ alteridx_orig | alteridx_new_msg_key_coll          |              | 
+ alteridx_orig | alteridx_new_msg_key_ops           |              | 
+ alteridx_orig | alteridx_orig_msg_key              |              | 
+ alteridx_orig | alteridx_new_ir_excl               |              | 
+ alteridx_orig | alteridx_orig_ir_excl              |              | 
+ alteridx_orig | alteridx_new_partition_key_key     |              | 
+ alteridx_orig | alteridx_orig_partition_key_key    |              | 
+ alteridx_orig | alteridx_new_partition_key_id_key  |              | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |              | 
+(33 rows)
+
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl",
+	  "alteridx_new_uniq_key_incl2",
+	  "alteridx_orig_expr_excl1",
+	  "alteridx_new_expr_excl1"}'::name[]);
+           indname           | indnatts | indnkeyatts | indkey | attnum | attname 
+-----------------------------+----------+-------------+--------+--------+---------
+ alteridx_new_expr_excl1     |        1 |           1 | 0      |      1 | id
+ alteridx_new_expr_excl1     |        1 |           1 | 0      |      2 | uniq
+ alteridx_new_expr_excl1     |        1 |           1 | 0      |      3 | parity
+ alteridx_new_uniq_key_incl  |        2 |           1 | 2 5    |      2 | uniq
+ alteridx_new_uniq_key_incl  |        2 |           1 | 2 5    |      5 | ir
+ alteridx_new_uniq_key_incl2 |        2 |           1 | 2 3    |      2 | uniq
+ alteridx_new_uniq_key_incl2 |        2 |           1 | 2 3    |      3 | parity
+ alteridx_orig_expr_excl1    |        1 |           1 | 0      |      1 | id
+ alteridx_orig_expr_excl1    |        1 |           1 | 0      |      2 | uniq
+ alteridx_orig_expr_excl1    |        1 |           1 | 0      |      3 | parity
+(10 rows)
+
+SELECT * FROM show_indexes_from_relation('partitioned_orig_alteridx');
+          relname          |                    indname                     | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |                    conname                     
+---------------------------+------------------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------------------
+ partitioned_orig_alteridx | partitioned_new_alteridx_partition_key_key     | I             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ partitioned_orig_alteridx | partitioned_orig_alteridx_partition_key_key    | I             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | partitioned_orig_alteridx_partition_key_key
+ partitioned_orig_alteridx | partitioned_new_alteridx_partition_key_id_key  | I             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ partitioned_orig_alteridx | partitioned_orig_alteridx_partition_key_id_key | I             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | partitioned_orig_alteridx_partition_key_id_key
+(4 rows)
+
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+     relname      |                  indname                  | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |          conname           
+------------------+-------------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+----------------------------
+ another_alteridx | another_alteridx_expr_excl                | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | another_alteridx_expr_excl
+ another_alteridx | another_alteridx_new_expr_excl            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_new_expr_excl_different  | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_new_expr_excl_different2 | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_pkey                     | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_pkey
+ another_alteridx | another_alteridx_uniq_key                 | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_uniq_key
+ another_alteridx | another_alteridx_ir_excl                  | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_ir_excl
+(7 rows)
+
+SELECT * FROM show_index_exprs_pred('another_alteridx');
+     relname      |                  indname                  | indpred |   indexprs   
+------------------+-------------------------------------------+---------+--------------
+ another_alteridx | another_alteridx_new_expr_excl_different  |         | id + 2 + 2
+ another_alteridx | another_alteridx_expr_excl                |         | id + 4
+ another_alteridx | another_alteridx_new_expr_excl            |         | id + 4
+ another_alteridx | another_alteridx_new_expr_excl_different2 |         | id + (2 + 2)
+ another_alteridx | another_alteridx_pkey                     |         | 
+ another_alteridx | another_alteridx_uniq_key                 |         | 
+ another_alteridx | another_alteridx_ir_excl                  |         | 
+(7 rows)
+
+SELECT * FROM show_constraints_named_like('another_%');
+          conname           | contype | conkey | condeferrable 
+----------------------------+---------+--------+---------------
+ another_alteridx_expr_excl | x       | {0}    | f
+ another_alteridx_pkey      | p       | {1}    | f
+ another_alteridx_uniq_key  | u       | {2}    | f
+ another_alteridx_ir_excl   | x       | {5}    | f
+(4 rows)
+
+DROP INDEX alteridx_orig_uniq_key; -- failure here
+ERROR:  cannot drop index alteridx_orig_uniq_key because constraint alteridx_orig_uniq_key on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_orig_uniq_key on table alteridx_orig instead.
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key_deferrable         | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_orig_double               | u       | {1,2}  | f
+ alteridx_orig_uniq_key             | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_parity_check
+USING INDEX alteridx_new_uniq_key; -- failure here
+ERROR:  constraint "alteridx_orig_parity_check" of relation "alteridx_orig" is not a primary key, unique constraint, exclusion constraint or foreign key constraint
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_uniq_key
+USING INDEX alteridx_orig_uniq_key;
+NOTICE:  constraint "alteridx_orig_uniq_key" already uses index "alteridx_orig_uniq_key", skipping
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_uniq_key
+USING INDEX alteridx_new_uniq_key;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_uniq_key" to "alteridx_new_uniq_key"
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key_deferrable         | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_orig_double               | u       | {1,2}  | f
+ alteridx_new_uniq_key              | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_new_uniq_key              | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_pred         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_uniq_key             | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(33 rows)
+
+DROP INDEX alteridx_orig_uniq_key;
+DROP INDEX alteridx_new_uniq_key; -- failure here
+ERROR:  cannot drop index alteridx_new_uniq_key because constraint alteridx_new_uniq_key on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_uniq_key on table alteridx_orig instead.
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_new_uniq_key              | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_pred         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(32 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key
+USING INDEX alteridx_new_uniq_key_incl2;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_new_uniq_key" to "alteridx_new_uniq_key_incl2"
+--
+--
+-- Checking that all dependencies on simply-referenced columns are correctly
+-- added for the old constraint index (the included columns may differ between
+-- the old and new constraint indexes).
+--
+--
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+    relname    |           indname           | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |           conname           
+---------------+-----------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-----------------------------
+ alteridx_orig | alteridx_new_uniq_key_incl2 | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_incl2
+ alteridx_orig | alteridx_new_uniq_key_incl  | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+(2 rows)
+
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+          indname           | indnatts | indnkeyatts | indkey | attnum | attname 
+----------------------------+----------+-------------+--------+--------+---------
+ alteridx_new_uniq_key_incl |        2 |           1 | 2 5    |      2 | uniq
+ alteridx_new_uniq_key_incl |        2 |           1 | 2 5    |      5 | ir
+(2 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_incl2
+USING INDEX alteridx_new_uniq_key_incl;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_new_uniq_key_incl2" to "alteridx_new_uniq_key_incl"
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+    relname    |           indname           | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |          conname           
+---------------+-----------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+----------------------------
+ alteridx_orig | alteridx_new_uniq_key_incl2 | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl  | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_incl
+(2 rows)
+
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_new_uniq_key_incl", "alteridx_new_uniq_key_incl2"}'::name[]);
+           indname           | indnatts | indnkeyatts | indkey | attnum | attname 
+-----------------------------+----------+-------------+--------+--------+---------
+ alteridx_new_uniq_key_incl2 |        2 |           1 | 2 3    |      2 | uniq
+ alteridx_new_uniq_key_incl2 |        2 |           1 | 2 3    |      3 | parity
+(2 rows)
+
+--
+DROP INDEX alteridx_new_uniq_key;
+DROP INDEX alteridx_new_uniq_key_incl; -- failure here
+ERROR:  cannot drop index alteridx_new_uniq_key_incl because constraint alteridx_new_uniq_key_incl on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_uniq_key_incl on table alteridx_orig instead.
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key_deferrable         | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_orig_double               | u       | {1,2}  | f
+ alteridx_new_uniq_key_incl         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_pred         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_incl
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(31 rows)
+
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |   indpred    | indexprs  
+---------------+------------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl            | id > 2       | id + uniq
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl             | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl1            | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred        | id > 3       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2       | id > (3 - 1) | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_hash        | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_wrong       | id > 2       | id - uniq
+ alteridx_orig | alteridx_new_uniq_key_pred         | parity = 1   | 
+ alteridx_orig | alteridx_id_key                    |              | 
+ alteridx_orig | alteridx_id_key_deferrable         |              | 
+ alteridx_orig | alteridx_new_pkey                  |              | 
+ alteridx_orig | alteridx_orig_pkey                 |              | 
+ alteridx_orig | alteridx_new_double                |              | 
+ alteridx_orig | alteridx_new_double_not_unique     |              | 
+ alteridx_orig | alteridx_orig_double               |              | 
+ alteridx_orig | alteridx_new_uniq_key_back         |              | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    |              | 
+ alteridx_orig | alteridx_new_uniq_key_opt          |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        |              | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl         |              | 
+ alteridx_orig | alteridx_new_msg_key_coll          |              | 
+ alteridx_orig | alteridx_new_msg_key_ops           |              | 
+ alteridx_orig | alteridx_orig_msg_key              |              | 
+ alteridx_orig | alteridx_new_ir_excl               |              | 
+ alteridx_orig | alteridx_orig_ir_excl              |              | 
+ alteridx_orig | alteridx_new_partition_key_key     |              | 
+ alteridx_orig | alteridx_orig_partition_key_key    |              | 
+ alteridx_orig | alteridx_new_partition_key_id_key  |              | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |              | 
+(31 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_incl
+USING INDEX alteridx_new_uniq_key_back;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_new_uniq_key_incl" to "alteridx_new_uniq_key_back"
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key_deferrable         | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_orig_double               | u       | {1,2}  | f
+ alteridx_new_uniq_key_back         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_pred; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_pred"
+DETAIL:  Either none or both indexes must have partial index predicates.
+DROP INDEX alteridx_new_uniq_key_pred;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_no_unique; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_no_unique"
+DETAIL:  Both indexes must be either unique or not.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_msg_key_coll; -- failure here
+ERROR:  index in constraint "alteridx_orig_msg_key" cannot be replaced by "alteridx_new_msg_key_coll"
+DETAIL:  Indexes must have the same collation.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_msg_key_ops; -- failure here
+ERROR:  index in constraint "alteridx_orig_msg_key" cannot be replaced by "alteridx_new_msg_key_ops"
+DETAIL:  Indexes must have the same operator class.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_msg_key
+USING INDEX alteridx_new_uniq_key_incl; -- failure here
+ERROR:  index in constraint "alteridx_orig_msg_key" cannot be replaced by "alteridx_new_uniq_key_incl"
+DETAIL:  Indexes must have the same key columns.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX another_alteridx_uniq_key; -- failure here
+ERROR:  "another_alteridx_uniq_key" is not an index for table "alteridx_orig"
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_with_msg; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_with_msg"
+DETAIL:  Indexes must have the same number of key columns.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_opt; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_opt"
+DETAIL:  Indexes must have the same per-column flag bits.
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_uniq_key_incl; -- failure here
+ERROR:  index in constraint "alteridx_orig_double" cannot be replaced by "alteridx_new_uniq_key_incl"
+DETAIL:  Indexes must have the same number of key columns.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_double_not_unique; -- failure here
+ERROR:  index in constraint "alteridx_orig_double" cannot be replaced by "alteridx_new_double_not_unique"
+DETAIL:  Both indexes must be either unique or not.
+--
+--
+-- Checking that a notice is issued when the replica identity index is no
+-- longer used by the constraint.
+--
+--
+ALTER TABLE alteridx_orig REPLICA IDENTITY USING INDEX alteridx_orig_double;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | t              | f                | f                         | t                     | alteridx_orig_double
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(30 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_double
+USING INDEX alteridx_new_double;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_double" to "alteridx_new_double"
+NOTICE:  replaced index "alteridx_orig_double" is still chosen as replica identity
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_double
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | t              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(30 rows)
+
+--
+--
+-- Checking that deferrable constraints cannot use the replica identity index
+--
+--
+ALTER TABLE alteridx_orig REPLICA IDENTITY USING INDEX alteridx_id_key;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | t              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_double
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(30 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_id_key_deferrable
+USING INDEX alteridx_id_key; -- failure here
+ERROR:  index in constraint "alteridx_id_key_deferrable" cannot be replaced by "alteridx_id_key"
+DETAIL:  Deferrable constraint cannot use replica identity index.
+ALTER TABLE alteridx_orig REPLICA IDENTITY DEFAULT;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key_deferrable
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_double
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(30 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_id_key_deferrable
+USING INDEX alteridx_id_key;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_id_key_deferrable" to "alteridx_id_key"
+--
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                    | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_new_double                | u       | {1,2}  | f
+ alteridx_new_uniq_key_back         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(11 rows)
+
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_hash        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred        | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_new_expr_excl_wrong       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_double                | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_double
+ alteridx_orig | alteridx_new_double_not_unique     | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_double               | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(30 rows)
+
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_orig_double; -- failure here
+ERROR:  constraint "alteridx_orig_double" of relation "alteridx_orig" does not exist
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_new_double;
+DROP INDEX alteridx_orig_double;
+DROP INDEX alteridx_new_double_not_unique;
+--
+--
+-- Tests for primary key constraint --
+--
+--
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                    | u       | {1}    | t
+ alteridx_orig_pkey                 | p       | {1}    | f
+ alteridx_new_uniq_key_back         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(10 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_pkey
+USING INDEX alteridx_new_pkey;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_pkey" to "alteridx_new_pkey"
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                    | u       | {1}    | t
+ alteridx_new_pkey                  | p       | {1}    | f
+ alteridx_new_uniq_key_back         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_orig_ir_excl              | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(10 rows)
+
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX another_alteridx_uniq_key; -- failure here
+ERROR:  "another_alteridx_uniq_key" is not an index for table "alteridx_orig"
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX another_alteridx_pkey; -- failure here
+ERROR:  "another_alteridx_pkey" is not an index for table "alteridx_orig"
+--
+SELECT * FROM show_indexes_from_relation('third_alteridx');
+    relname     |            indname             | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |       conname       
+----------------+--------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+---------------------
+ third_alteridx | third_alteridx_pkey_single     | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey            | i             | f                    | t           | t            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | third_alteridx_pkey
+ third_alteridx | third_alteridx_pkey_new        | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey_not_unique | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey_opp        | i             | f                    | t           | f            | f              | 2 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+(5 rows)
+
+SELECT * FROM show_constraints_named_like('third_%');
+       conname       | contype | conkey | condeferrable 
+---------------------+---------+--------+---------------
+ third_alteridx_pkey | p       | {1,2}  | f
+(1 row)
+
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_opp; -- failure here
+ERROR:  index in constraint "third_alteridx_pkey" cannot be replaced by "third_alteridx_pkey_opp"
+DETAIL:  Indexes must have the same key columns.
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_not_unique; -- failure here
+ERROR:  index in constraint "third_alteridx_pkey" cannot be replaced by "third_alteridx_pkey_not_unique"
+DETAIL:  Both indexes must be either unique or not.
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_single; -- failure here
+ERROR:  index in constraint "third_alteridx_pkey" cannot be replaced by "third_alteridx_pkey_single"
+DETAIL:  Indexes must have the same number of key columns.
+ALTER TABLE third_alteridx ALTER CONSTRAINT third_alteridx_pkey
+USING INDEX third_alteridx_pkey_new;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "third_alteridx_pkey" to "third_alteridx_pkey_new"
+SELECT * FROM show_indexes_from_relation('third_alteridx');
+    relname     |            indname             | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |         conname         
+----------------+--------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-------------------------
+ third_alteridx | third_alteridx_pkey_single     | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey            | i             | f                    | t           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey_new        | i             | f                    | t           | t            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | third_alteridx_pkey_new
+ third_alteridx | third_alteridx_pkey_not_unique | i             | f                    | f           | f            | f              | 1 2    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ third_alteridx | third_alteridx_pkey_opp        | i             | f                    | t           | f            | f              | 2 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+(5 rows)
+
+SELECT * FROM show_constraints_named_like('third_%');
+         conname         | contype | conkey | condeferrable 
+-------------------------+---------+--------+---------------
+ third_alteridx_pkey_new | p       | {1,2}  | f
+(1 row)
+
+--
+--
+-- Tests for exclusion constraint --
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_ir_excl
+USING INDEX alteridx_new_ir_excl;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_ir_excl" to "alteridx_new_ir_excl"
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname               | contype | conkey | condeferrable 
+------------------------------------+---------+--------+---------------
+ alteridx_orig_expr_excl            | x       | {0}    | f
+ alteridx_orig_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                    | u       | {1}    | t
+ alteridx_new_pkey                  | p       | {1}    | f
+ alteridx_new_uniq_key_back         | u       | {2}    | f
+ alteridx_orig_parity_check         | c       | {3}    | f
+ alteridx_orig_msg_key              | u       | {4}    | f
+ alteridx_new_ir_excl               | x       | {5}    | f
+ alteridx_orig_partition_key_key    | u       | {6}    | f
+ alteridx_orig_partition_key_id_key | u       | {6,1}  | f
+(10 rows)
+
+--
+ALTER TABLE alteridx_orig ADD CONSTRAINT alteridx_new_expr_excl2
+EXCLUDE ((id + uniq) WITH =) WHERE (id > 2);
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl2; -- failure here
+ERROR:  index "alteridx_new_expr_excl2" is already associated with a constraint
+ALTER TABLE alteridx_orig DROP CONSTRAINT alteridx_new_expr_excl2;
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_orig_ir_excl; -- failure here
+ERROR:  index in constraint "alteridx_orig_expr_excl" cannot be replaced by "alteridx_orig_ir_excl"
+DETAIL:  Indexes must have the same access methods.
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ERROR:  index in constraint "alteridx_orig_expr_excl" cannot be replaced by "alteridx_new_expr_excl_wrong"
+DETAIL:  Indexes must have the same non-column attributes.
+--
+--
+-- Checking that after simplifying the constants from index predicates some
+-- indexes are considered equal.
+--
+--
+SELECT * FROM show_some_index_exprs_pred(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl",
+	  "alteridx_new_expr_excl_pred",
+	  "alteridx_new_expr_excl_pred2"}'::name[]);
+    relname    |           indname            |   indpred    | indexprs  
+---------------+------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl      | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred  | id > 3       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2 | id > (3 - 1) | id + uniq
+(3 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_pred; -- failure here
+ERROR:  index in constraint "alteridx_orig_expr_excl" cannot be replaced by "alteridx_new_expr_excl_pred"
+DETAIL:  Indexes must have the same partial index predicates.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_pred2;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_expr_excl" to "alteridx_new_expr_excl_pred2"
+--
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl_pred2
+USING INDEX alteridx_new_expr_excl_hash; -- failure here
+ERROR:  index in constraint "alteridx_new_expr_excl_pred2" cannot be replaced by "alteridx_new_expr_excl_hash"
+DETAIL:  Indexes must have the same access methods.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl_pred2
+USING INDEX alteridx_new_expr_excl;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_new_expr_excl_pred2" to "alteridx_new_expr_excl"
+--
+--
+-- Checking that all dependencies on columns from index expressions and/or index
+-- predicate are not removed for new constraint index (they always exist both
+-- for standalone and constraint indexes).
+--
+--
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+    relname    |         indname          | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |         conname          
+---------------+--------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+--------------------------
+ alteridx_orig | alteridx_new_expr_excl1  | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1 | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_orig_expr_excl1
+(2 rows)
+
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+         indname          | indnatts | indnkeyatts | indkey | attnum | attname 
+--------------------------+----------+-------------+--------+--------+---------
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      1 | id
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      2 | uniq
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      3 | parity
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      1 | id
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      2 | uniq
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      3 | parity
+(6 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl1
+USING INDEX alteridx_new_expr_excl1;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_expr_excl1" to "alteridx_new_expr_excl1"
+SELECT * FROM show_some_indexes_from_relation(
+	'alteridx_orig',
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+    relname    |         indname          | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |         conname         
+---------------+--------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-------------------------
+ alteridx_orig | alteridx_new_expr_excl1  | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_orig_expr_excl1 | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+(2 rows)
+
+SELECT * FROM show_index_dependencies_on_table_columns(
+	'{"alteridx_orig_expr_excl1", "alteridx_new_expr_excl1"}'::name[]);
+         indname          | indnatts | indnkeyatts | indkey | attnum | attname 
+--------------------------+----------+-------------+--------+--------+---------
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      1 | id
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      2 | uniq
+ alteridx_new_expr_excl1  |        1 |           1 | 0      |      3 | parity
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      1 | id
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      2 | uniq
+ alteridx_orig_expr_excl1 |        1 |           1 | 0      |      3 | parity
+(6 rows)
+
+--
+--
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |   indpred    | indexprs  
+---------------+------------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl            | id > 2       | id + uniq
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl             | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl1            | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred        | id > 3       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2       | id > (3 - 1) | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_hash        | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_wrong       | id > 2       | id - uniq
+ alteridx_orig | alteridx_id_key                    |              | 
+ alteridx_orig | alteridx_id_key_deferrable         |              | 
+ alteridx_orig | alteridx_new_pkey                  |              | 
+ alteridx_orig | alteridx_orig_pkey                 |              | 
+ alteridx_orig | alteridx_new_uniq_key_back         |              | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    |              | 
+ alteridx_orig | alteridx_new_uniq_key_opt          |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        |              | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl         |              | 
+ alteridx_orig | alteridx_new_msg_key_coll          |              | 
+ alteridx_orig | alteridx_new_msg_key_ops           |              | 
+ alteridx_orig | alteridx_orig_msg_key              |              | 
+ alteridx_orig | alteridx_new_ir_excl               |              | 
+ alteridx_orig | alteridx_orig_ir_excl              |              | 
+ alteridx_orig | alteridx_new_partition_key_key     |              | 
+ alteridx_orig | alteridx_orig_partition_key_key    |              | 
+ alteridx_orig | alteridx_new_partition_key_id_key  |              | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |              | 
+(27 rows)
+
+DROP INDEX alteridx_new_expr_excl_wrong;
+ALTER TABLE alteridx_does_not_exist ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ERROR:  relation "alteridx_does_not_exist" does not exist
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ERROR:  constraint "alteridx_orig_expr_excl" of relation "alteridx_orig" does not exist
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_expr_excl
+USING INDEX alteridx_new_expr_excl_wrong; -- failure here
+ERROR:  index "alteridx_new_expr_excl_wrong" for table "alteridx_orig" does not exist
+--
+DROP INDEX alteridx_new_expr_excl_pred;
+DROP INDEX alteridx_orig_expr_excl;
+DROP INDEX alteridx_new_expr_excl_hash;
+DROP INDEX alteridx_new_expr_excl; -- failure here
+ERROR:  cannot drop index alteridx_new_expr_excl because constraint alteridx_new_expr_excl on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_expr_excl on table alteridx_orig instead.
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |   indpred    | indexprs  
+---------------+------------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl             | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl1            | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2       | id > (3 - 1) | id + uniq
+ alteridx_orig | alteridx_id_key                    |              | 
+ alteridx_orig | alteridx_id_key_deferrable         |              | 
+ alteridx_orig | alteridx_new_pkey                  |              | 
+ alteridx_orig | alteridx_orig_pkey                 |              | 
+ alteridx_orig | alteridx_new_uniq_key_back         |              | 
+ alteridx_orig | alteridx_new_uniq_key_no_unique    |              | 
+ alteridx_orig | alteridx_new_uniq_key_opt          |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        |              | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl         |              | 
+ alteridx_orig | alteridx_new_msg_key_coll          |              | 
+ alteridx_orig | alteridx_new_msg_key_ops           |              | 
+ alteridx_orig | alteridx_orig_msg_key              |              | 
+ alteridx_orig | alteridx_new_ir_excl               |              | 
+ alteridx_orig | alteridx_orig_ir_excl              |              | 
+ alteridx_orig | alteridx_new_partition_key_key     |              | 
+ alteridx_orig | alteridx_orig_partition_key_key    |              | 
+ alteridx_orig | alteridx_new_partition_key_id_key  |              | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |              | 
+(23 rows)
+
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_expr_excl
+USING INDEX another_alteridx_new_expr_excl;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "another_alteridx_expr_excl" to "another_alteridx_new_expr_excl"
+--
+--
+-- Checking that after simplifying the constants from index expressions some
+-- indexes are considered equal.
+--
+--
+SELECT * FROM show_some_index_exprs_pred(
+	'another_alteridx',
+	'{"another_alteridx_new_expr_excl",
+	  "another_alteridx_new_expr_excl_different",
+	  "another_alteridx_new_expr_excl_different2"}'::name[]);
+     relname      |                  indname                  | indpred |   indexprs   
+------------------+-------------------------------------------+---------+--------------
+ another_alteridx | another_alteridx_new_expr_excl_different  |         | id + 2 + 2
+ another_alteridx | another_alteridx_new_expr_excl            |         | id + 4
+ another_alteridx | another_alteridx_new_expr_excl_different2 |         | id + (2 + 2)
+(3 rows)
+
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_new_expr_excl
+USING INDEX another_alteridx_new_expr_excl_different; -- failure here
+ERROR:  index in constraint "another_alteridx_new_expr_excl" cannot be replaced by "another_alteridx_new_expr_excl_different"
+DETAIL:  Indexes must have the same non-column attributes.
+ALTER TABLE another_alteridx ALTER CONSTRAINT another_alteridx_new_expr_excl
+USING INDEX another_alteridx_new_expr_excl_different2;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "another_alteridx_new_expr_excl" to "another_alteridx_new_expr_excl_different2"
+SELECT * FROM show_indexes_from_relation('another_alteridx');
+     relname      |                  indname                  | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |                  conname                  
+------------------+-------------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-------------------------------------------
+ another_alteridx | another_alteridx_expr_excl                | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_new_expr_excl            | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_new_expr_excl_different  | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ another_alteridx | another_alteridx_new_expr_excl_different2 | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | another_alteridx_new_expr_excl_different2
+ another_alteridx | another_alteridx_pkey                     | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_pkey
+ another_alteridx | another_alteridx_uniq_key                 | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_uniq_key
+ another_alteridx | another_alteridx_ir_excl                  | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | another_alteridx_ir_excl
+(7 rows)
+
+SELECT * FROM show_constraints_named_like('another_%');
+                  conname                  | contype | conkey | condeferrable 
+-------------------------------------------+---------+--------+---------------
+ another_alteridx_new_expr_excl_different2 | x       | {0}    | f
+ another_alteridx_pkey                     | p       | {1}    | f
+ another_alteridx_uniq_key                 | u       | {2}    | f
+ another_alteridx_ir_excl                  | x       | {5}    | f
+(4 rows)
+
+--
+--
+-- Checking that DDL changes can be rolled back
+--
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(23 rows)
+
+BEGIN;
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_pkey
+USING INDEX alteridx_orig_pkey;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_new_pkey" to "alteridx_orig_pkey"
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_pkey
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(23 rows)
+
+ROLLBACK;
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_key     | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | t                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | t                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | t                | f                         | t                     | alteridx_orig_partition_key_id_key
+(23 rows)
+
+--
+--
+-- Checking constraints in partitions and partitioned tables
+--
+--
+SELECT * FROM show_alteridx_index_dependencies();
+              indname               |               referenced_indname               
+------------------------------------+------------------------------------------------
+ alteridx_new_partition_key_id_key  | partitioned_new_alteridx_partition_key_id_key
+ alteridx_new_partition_key_key     | partitioned_new_alteridx_partition_key_key
+ alteridx_orig_partition_key_id_key | partitioned_orig_alteridx_partition_key_id_key
+ alteridx_orig_partition_key_key    | partitioned_orig_alteridx_partition_key_key
+(4 rows)
+
+SELECT * FROM show_alteridx_constraint_dependencies();
+              conname               |               referenced_conname               
+------------------------------------+------------------------------------------------
+ alteridx_orig_partition_key_id_key | partitioned_orig_alteridx_partition_key_id_key
+ alteridx_orig_partition_key_key    | partitioned_orig_alteridx_partition_key_key
+(2 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_key
+USING INDEX alteridx_new_partition_key_key; -- failure here
+ERROR:  index in constraint "alteridx_orig_partition_key_key" cannot be replaced by "alteridx_new_partition_key_key"
+DETAIL:  One of the indexes is a partition index.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_id_key
+USING INDEX alteridx_new_partition_key_id_key; -- failure here
+ERROR:  index in constraint "alteridx_orig_partition_key_id_key" cannot be replaced by "alteridx_new_partition_key_id_key"
+DETAIL:  One of the indexes is a partition index.
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_key
+USING INDEX partitioned_new_alteridx_partition_key_key; -- failure here
+ERROR:  index in constraint "partitioned_orig_alteridx_partition_key_key" cannot be replaced by "partitioned_new_alteridx_partition_key_key"
+DETAIL:  One of the indexes is a partitioned index.
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_id_key
+USING INDEX partitioned_new_alteridx_partition_key_id_key; -- failure here
+ERROR:  index in constraint "partitioned_orig_alteridx_partition_key_id_key" cannot be replaced by "partitioned_new_alteridx_partition_key_id_key"
+DETAIL:  One of the indexes is a partitioned index.
+ALTER TABLE partitioned_orig_alteridx DETACH PARTITION alteridx_orig;
+SELECT * FROM show_alteridx_index_dependencies();
+ indname | referenced_indname 
+---------+--------------------
+(0 rows)
+
+SELECT * FROM show_alteridx_constraint_dependencies();
+ conname | referenced_conname 
+---------+--------------------
+(0 rows)
+
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname               
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+------------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_orig_pkey                 | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_no_unique    | i             | f                    | f           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_opt          | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 1         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl         | i             | f                    | t           | f            | f              | 2 5    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_coll          | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 12341        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_msg_key_ops           | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_orig_ir_excl              | i             | f                    | f           | f            | f              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_key     | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_orig_partition_key_key
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_orig_partition_key_id_key
+(23 rows)
+
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_key
+USING INDEX alteridx_new_partition_key_key;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_partition_key_key" to "alteridx_new_partition_key_key"
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_orig_partition_key_id_key
+USING INDEX alteridx_new_partition_key_id_key;
+NOTICE:  ALTER TABLE / ALTER CONSTRAINT USING INDEX will rename constraint "alteridx_orig_partition_key_id_key" to "alteridx_new_partition_key_id_key"
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_key
+USING INDEX partitioned_new_alteridx_partition_key_key; -- failure here
+ERROR:  index in constraint "partitioned_orig_alteridx_partition_key_key" cannot be replaced by "partitioned_new_alteridx_partition_key_key"
+DETAIL:  One of the indexes is a partitioned index.
+ALTER TABLE partitioned_orig_alteridx
+ALTER CONSTRAINT partitioned_orig_alteridx_partition_key_id_key
+USING INDEX partitioned_new_alteridx_partition_key_id_key; -- failure here
+ERROR:  index in constraint "partitioned_orig_alteridx_partition_key_id_key" cannot be replaced by "partitioned_new_alteridx_partition_key_id_key"
+DETAIL:  One of the indexes is a partitioned index.
+--
+--
+-- Dropping replaced indexes
+--
+--
+DROP INDEX alteridx_orig_ir_excl;
+DROP INDEX alteridx_orig_pkey;
+DROP INDEX alteridx_new_uniq_key_no_unique;
+DROP INDEX alteridx_new_uniq_key_incl;
+DROP INDEX alteridx_new_msg_key_coll;
+DROP INDEX alteridx_new_msg_key_ops;
+--
+-- Trying to drop indexes used in constraints after replacement
+--
+DROP INDEX alteridx_new_pkey; -- failure here
+ERROR:  cannot drop index alteridx_new_pkey because constraint alteridx_new_pkey on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_pkey on table alteridx_orig instead.
+DROP INDEX alteridx_new_uniq_key_back; -- failure here
+ERROR:  cannot drop index alteridx_new_uniq_key_back because constraint alteridx_new_uniq_key_back on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_uniq_key_back on table alteridx_orig instead.
+DROP INDEX alteridx_orig_msg_key; -- failure here
+ERROR:  cannot drop index alteridx_orig_msg_key because constraint alteridx_orig_msg_key on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_orig_msg_key on table alteridx_orig instead.
+DROP INDEX alteridx_new_ir_excl; -- failure here
+ERROR:  cannot drop index alteridx_new_ir_excl because constraint alteridx_new_ir_excl on table alteridx_orig requires it
+HINT:  You can drop constraint alteridx_new_ir_excl on table alteridx_orig instead.
+DROP INDEX alteridx_new_uniq_key_opt;
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname              
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-----------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_key
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_id_key
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+(16 rows)
+
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname              | contype | conkey | condeferrable 
+-----------------------------------+---------+--------+---------------
+ alteridx_new_expr_excl            | x       | {0}    | f
+ alteridx_new_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                   | u       | {1}    | t
+ alteridx_new_pkey                 | p       | {1}    | f
+ alteridx_new_uniq_key_back        | u       | {2}    | f
+ alteridx_orig_parity_check        | c       | {3}    | f
+ alteridx_orig_msg_key             | u       | {4}    | f
+ alteridx_new_ir_excl              | x       | {5}    | f
+ alteridx_new_partition_key_key    | u       | {6}    | f
+ alteridx_new_partition_key_id_key | u       | {6,1}  | f
+(10 rows)
+
+--
+--
+-- Checking that indexes unavailable for use can't be picked for replacement
+--
+--
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_live ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_valid ON alteridx_orig(uniq);
+CREATE UNIQUE INDEX alteridx_new_uniq_key_not_ready ON alteridx_orig(uniq);
+UPDATE pg_index SET indislive=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_live';
+UPDATE pg_index SET indisvalid=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_valid';
+UPDATE pg_index SET indisready=false
+FROM pg_class i WHERE indexrelid = i.oid AND i.relname = 'alteridx_new_uniq_key_not_ready';
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname              
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-----------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_not_live     | i             | f                    | t           | f            | f              | 2      | f         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_not_ready    | i             | f                    | t           | f            | f              | 2      | t         | t          | f          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_not_valid    | i             | f                    | t           | f            | f              | 2      | t         | f          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_key
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_id_key
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+(19 rows)
+
+--
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_live; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_not_live"
+DETAIL:  One of the indexes is being dropped.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_valid; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_not_valid"
+DETAIL:  One of the indexes is not valid for queries.
+ALTER TABLE alteridx_orig ALTER CONSTRAINT alteridx_new_uniq_key_back
+USING INDEX alteridx_new_uniq_key_not_ready; -- failure here
+ERROR:  index in constraint "alteridx_new_uniq_key_back" cannot be replaced by "alteridx_new_uniq_key_not_ready"
+DETAIL:  One of the indexes is not ready for inserts.
+--
+DROP INDEX alteridx_new_uniq_key_not_live;
+DROP INDEX alteridx_new_uniq_key_not_valid;
+DROP INDEX alteridx_new_uniq_key_not_ready;
+--
+SELECT * FROM show_indexes_from_relation('alteridx_orig');
+    relname    |              indname               | index_relkind | index_relispartition | indisunique | indisprimary | indisexclusion | indkey | indislive | indisvalid | indisready | indoption | indcollation | indimmediate | indisreplident | depends_on_table | depends_on_simple_columns | depends_on_constraint |              conname              
+---------------+------------------------------------+---------------+----------------------+-------------+--------------+----------------+--------+-----------+------------+------------+-----------+--------------+--------------+----------------+------------------+---------------------------+-----------------------+-----------------------------------
+ alteridx_orig | alteridx_new_expr_excl             | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl
+ alteridx_orig | alteridx_new_expr_excl1            | i             | f                    | f           | f            | t              | 0      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | t                     | alteridx_new_expr_excl1
+ alteridx_orig | alteridx_new_expr_excl_pred2       | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_expr_excl1           | i             | f                    | f           | f            | f              | 0      | t         | t          | t          | 0         | 0            | t            | f              | t                | t                         | f                     | 
+ alteridx_orig | alteridx_id_key                    | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | f            | f              | f                | f                         | t                     | alteridx_id_key
+ alteridx_orig | alteridx_id_key_deferrable         | i             | f                    | t           | f            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_pkey                  | i             | f                    | t           | t            | f              | 1      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_pkey
+ alteridx_orig | alteridx_new_uniq_key_back         | i             | f                    | t           | f            | f              | 2      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_uniq_key_back
+ alteridx_orig | alteridx_new_uniq_key_incl2        | i             | f                    | t           | f            | f              | 2 3    | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     | i             | f                    | t           | f            | f              | 2 4    | t         | t          | t          | 0 0       | 0 100        | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_orig_msg_key              | i             | f                    | t           | f            | f              | 4      | t         | t          | t          | 0         | 100          | t            | f              | f                | f                         | t                     | alteridx_orig_msg_key
+ alteridx_orig | alteridx_new_ir_excl               | i             | f                    | f           | f            | t              | 5      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_ir_excl
+ alteridx_orig | alteridx_new_partition_key_key     | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_key
+ alteridx_orig | alteridx_orig_partition_key_key    | i             | f                    | t           | f            | f              | 6      | t         | t          | t          | 0         | 0            | t            | f              | f                | t                         | f                     | 
+ alteridx_orig | alteridx_new_partition_key_id_key  | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | f                         | t                     | alteridx_new_partition_key_id_key
+ alteridx_orig | alteridx_orig_partition_key_id_key | i             | f                    | t           | f            | f              | 6 1    | t         | t          | t          | 0 0       | 0 0          | t            | f              | f                | t                         | f                     | 
+(16 rows)
+
+SELECT * FROM show_index_exprs_pred('alteridx_orig');
+    relname    |              indname               |   indpred    | indexprs  
+---------------+------------------------------------+--------------+-----------
+ alteridx_orig | alteridx_orig_expr_excl1           | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl             | id > 2       | id + uniq
+ alteridx_orig | alteridx_new_expr_excl1            | parity < 4   | id + uniq
+ alteridx_orig | alteridx_new_expr_excl_pred2       | id > (3 - 1) | id + uniq
+ alteridx_orig | alteridx_id_key                    |              | 
+ alteridx_orig | alteridx_id_key_deferrable         |              | 
+ alteridx_orig | alteridx_new_pkey                  |              | 
+ alteridx_orig | alteridx_new_uniq_key_back         |              | 
+ alteridx_orig | alteridx_new_uniq_key_incl2        |              | 
+ alteridx_orig | alteridx_new_uniq_key_with_msg     |              | 
+ alteridx_orig | alteridx_orig_msg_key              |              | 
+ alteridx_orig | alteridx_new_ir_excl               |              | 
+ alteridx_orig | alteridx_new_partition_key_key     |              | 
+ alteridx_orig | alteridx_orig_partition_key_key    |              | 
+ alteridx_orig | alteridx_new_partition_key_id_key  |              | 
+ alteridx_orig | alteridx_orig_partition_key_id_key |              | 
+(16 rows)
+
+SELECT * FROM show_constraints_named_like('alteridx_%');
+              conname              | contype | conkey | condeferrable 
+-----------------------------------+---------+--------+---------------
+ alteridx_new_expr_excl            | x       | {0}    | f
+ alteridx_new_expr_excl1           | x       | {0}    | f
+ alteridx_id_key                   | u       | {1}    | t
+ alteridx_new_pkey                 | p       | {1}    | f
+ alteridx_new_uniq_key_back        | u       | {2}    | f
+ alteridx_orig_parity_check        | c       | {3}    | f
+ alteridx_orig_msg_key             | u       | {4}    | f
+ alteridx_new_ir_excl              | x       | {5}    | f
+ alteridx_new_partition_key_key    | u       | {6}    | f
+ alteridx_new_partition_key_id_key | u       | {6,1}  | f
+(10 rows)
+
+--
+--
+-- Checking that constraints still work
+--
+--
+INSERT INTO alteridx_orig VALUES(1, 0, 1, 'AA', int4range(102, 103), 17); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_msg_key"
+DETAIL:  Key (msg)=(AA) already exists.
+INSERT INTO alteridx_orig VALUES(0, 1, 1, 'AA', int4range(104, 105), 17); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_msg_key"
+DETAIL:  Key (msg)=(AA) already exists.
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(100, 107), 17); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_msg_key"
+DETAIL:  Key (msg)=(AA) already exists.
+INSERT INTO alteridx_orig VALUES(0, 0, 1, 'AA', int4range(102, 107), 17); -- failure here
+ERROR:  duplicate key value violates unique constraint "alteridx_orig_msg_key"
+DETAIL:  Key (msg)=(AA) already exists.
+INSERT INTO alteridx_orig VALUES(NULL, 0, 1, 'AA', int4range(102, 107), 17); -- failure here
+ERROR:  null value in column "id" of relation "alteridx_orig" violates not-null constraint
+DETAIL:  Failing row contains (null, 0, 1, AA, [102,107), 17).
+INSERT INTO alteridx_orig VALUES(0, NULL, 1, 'AA', int4range(102, 107), 17); -- failure here
+ERROR:  null value in column "uniq" of relation "alteridx_orig" violates not-null constraint
+DETAIL:  Failing row contains (0, null, 1, AA, [102,107), 17).
+INSERT INTO alteridx_orig VALUES(-1, -1, 1, 'BB', int4range(108, 110), 17);
+--
+SELECT * FROM alteridx_orig;
+ id | uniq | parity | msg |    ir     | partition_key 
+----+------+--------+-----+-----------+---------------
+  1 |    1 |      1 | ?@  | [2,3)     |             1
+  2 |    2 |      0 | @A  | [4,5)     |             2
+  3 |    3 |      1 | AB  | [6,7)     |             3
+  4 |    4 |      0 | BC  | [8,9)     |             4
+  5 |    5 |      1 | CD  | [10,11)   |             5
+  6 |    6 |      0 | DE  | [12,13)   |             6
+  7 |    7 |      1 | EF  | [14,15)   |             7
+  8 |    8 |      0 | FG  | [16,17)   |             8
+  9 |    9 |      1 | GH  | [18,19)   |             9
+ 10 |   10 |      0 | HI  | [20,21)   |            10
+  0 |    0 |      1 | AA  | [102,107) |            15
+ -1 |   -1 |      1 | BB  | [108,110) |            17
+(12 rows)
+
+--
+--
+DROP FUNCTION show_indexes_from_relation(searched_relname name);
+DROP FUNCTION show_constraints_named_like(searched_conname name);
+DROP FUNCTION show_index_exprs_pred(searched_relname name);
+DROP TABLE alteridx_orig;
+DROP TABLE partitioned_orig_alteridx;
+DROP TABLE another_alteridx;
+DROP TABLE third_alteridx;
#67Michael Paquier
michael@paquier.xyz
In reply to: Konstantin Knizhnik (#63)
Re: Built-in connection pooler

On Thu, Jul 02, 2020 at 06:38:02PM +0300, Konstantin Knizhnik wrote:

Sorry, correct patch is attached.

This needs again a rebase, and has been waiting on author for 6 weeks
now, so I am switching it to RwF.
--
Michael

#68Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Michael Paquier (#67)
1 attachment(s)
Re: Built-in connection pooler

On 17.09.2020 8:07, Michael Paquier wrote:

On Thu, Jul 02, 2020 at 06:38:02PM +0300, Konstantin Knizhnik wrote:

Sorry, correct patch is attached.

This needs again a rebase, and has been waiting on author for 6 weeks
now, so I am switching it to RwF.
--
Michael

Attached is a rebased version of the patch.

I wonder what the correct policy for handling patch status is.
This patch was marked as WfA on 2020-07-01 because it no longer applied.
On 2020-07-02 I sent a rebased version of the patch.
Since then there have been no unanswered questions,
so I didn't think any extra activity was needed from my side.
I had not noticed that the patch no longer applied,
and now it is marked as Returned with Feedback.

So my questions are:
1. Should I myself change status from WfA to some other?
2. Is there some way to receive notifications that patch is not applied
any more?

I can resubmit this patch to the next commitfest if it is still
of interest to the community.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

builtin_connection_proxy-28.patch (text/x-patch)
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index 6fbfef2..27aa6cb 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -94,6 +95,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -286,6 +289,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2c75876..657216e 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -732,6 +732,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database/user combination.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through a proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends.
+          So the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, the postmaster cyclically distributes sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Supports setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
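
For reference, a minimal postgresql.conf fragment enabling the pooler with the GUCs documented above could look like the following. This is only an illustrative sketch; the values (two proxies, four backends per pool) are arbitrary examples, not recommendations:

connection_proxies = 2          # number of proxy worker processes
session_pool_size = 4           # backends per database/user pool in each proxy
proxy_port = 6543               # pooled clients connect to this port
max_sessions = 1000             # sessions multiplexed by each proxy (default)
session_schedule = 'load-balancing'
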
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000..c63ba26
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional effort for installation,
+    configuration and maintenance. Also, pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12, <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    This means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work only with a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) within one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only for each database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant permission to all roles to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered to be <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client is connected to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. This means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling the connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    The connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    The default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a value that is too large can degrade performance because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It actually affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor proxy state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  This can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of any other components.
+    It also does not introduce any limitations for clients: existing clients can work through the proxy and don't notice any difference.
+    If a client application requires session context, then it will be served by a dedicated backend. Such a connection will not participate in
+    connection pooling but it will work correctly. This is the main difference from pgbouncer,
+    which may cause incorrect behavior of client applications when a pooling policy other than session-level pooling is used.
+    And if an application does not change session context, then it can be implicitly pooled, reducing the number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/resume session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    by up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as the characteristics of the external and internal networks, the complexity of queries and the size of the returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-living transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-living transactions and set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
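
As a usage sketch of the behavior described in this chapter (an illustration, not something contained in the patch itself): a client that connects through the proxy port is pooled until it touches session state, at which point its backend becomes dedicated; this can be observed via pg_pooler_state(). Host, port and object names below are assumptions matching the defaults above.

$ psql "host=localhost port=6543 dbname=postgres"
postgres=# CREATE TEMP TABLE scratch(x int);   -- taints the backend, which is now dedicated to this session
CREATE TABLE
postgres=# SELECT pid, n_clients, n_backends, n_dedicated_backends FROM pg_pooler_state();
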
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 828396d..fb95e61 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index c41ce94..a8b0c40 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -165,6 +165,7 @@ break is not needed in a wider output rendering.
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index bcdbd95..196ca8c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index e4b7483..158fb4d 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -28,6 +28,7 @@
 #include "executor/executor.h"
 #include "executor/tstoreReceiver.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -58,6 +59,9 @@ PerformCursorOpen(ParseState *pstate, DeclareCursorStmt *cstmt, ParamListInfo pa
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	if (cstmt->options & CURSOR_OPT_HOLD)
+		MyProc->is_tainted = true; /* WITH HOLD cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 4b18be5..1ad98fd 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -441,6 +442,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6aab73b..64bf4d1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behavior with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required, because
+	 * nextval() may not be followed by currval().
+	 * But currval() may not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it creates
+	 * a sequence. Certainly this is just a temporary workaround, because a sequence may be created
+	 * in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index eab570a..3685270 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -619,6 +619,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index ac986c0..b817c7c 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -193,15 +193,13 @@ pq_init(void)
 {
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 	DoingCopyOut = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -218,6 +216,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
@@ -225,6 +228,7 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -327,7 +331,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 				 const char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -591,6 +595,7 @@ StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f..8c763c7 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..0a90a50
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_SPACE(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
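
The two functions above are the transport the postmaster uses to hand accepted client sockets to proxy workers over a socketpair. The following is a minimal caller-side sketch (not part of the patch) of how the two ends of such a channel could be used; it assumes the pg_send_sock()/pg_recv_sock() prototypes are visible through the port headers and omits the real postmaster bookkeeping:

#include "postgres.h"

#include <sys/socket.h>

static pgsocket proxy_chan[2];	/* [0] kept by the postmaster, [1] by the proxy */

static void
create_proxy_channel(void)
{
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, proxy_chan) < 0)
		elog(FATAL, "could not create socket pair: %m");
}

/* Postmaster side: after accept(), pass the client socket to the chosen proxy. */
static void
hand_off_client(pgsocket client_sock, pid_t proxy_pid)
{
	if (pg_send_sock(proxy_chan[0], client_sock, proxy_pid) < 0)
		elog(LOG, "could not send socket to proxy: %m");
}

/* Proxy side: receive the descriptor and start serving the client. */
static pgsocket
take_client(void)
{
	return pg_recv_sock(proxy_chan[1]);	/* PGINVALID_SOCKET on failure */
}
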
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index 6fbd1ed..b59cc26 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -690,3 +690,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index bfdf6a8..11dd9c8 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -24,6 +24,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000..f05b727
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000..d950a8c
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
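
The module above only installs the LibpqConnectdbParams hook; the proxy side is expected to call it to open an ordinary libpq connection to the local server when it needs a new worker backend. A rough caller-side sketch follows (not part of the patch; the keyword list, port value and error handling are illustrative, since the real proxy builds the parameters from the pooled client's startup packet):

#include "postgres.h"
#include "postmaster/postmaster.h"

static void *
open_worker_connection(const char *dbname, const char *user, const char *port)
{
	char const *keywords[] = {"host", "port", "dbname", "user", NULL};
	char const *values[]   = {"localhost", port, dbname, user, NULL};
	char	   *error = NULL;
	void	   *conn;

	/* The hook is set only when the libpqconn shared library is loaded. */
	if (LibpqConnectdbParams == NULL)
		elog(ERROR, "libpqconn library is not loaded");

	conn = LibpqConnectdbParams(keywords, values, &error);
	if (conn == NULL)
		elog(ERROR, "could not connect to worker backend: %s",
			 error ? error : "unknown error");
	return conn;
}
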
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 959e3b8..d84c749 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -115,6 +115,7 @@
 #include "postmaster/interrupt.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -199,6 +200,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -219,6 +223,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -249,6 +254,18 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -419,7 +436,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -441,6 +457,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -493,6 +510,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -576,6 +595,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -588,6 +649,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1138,6 +1202,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1161,32 +1230,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1255,29 +1328,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1287,6 +1363,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1412,6 +1502,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1649,6 +1741,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate workload of proxy.
+ * We have a lot of information about proxy state in ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU)
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions can not be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we should do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			int workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1739,8 +1882,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1943,8 +2096,6 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 {
 	int32		len;
 	void	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -2011,6 +2162,18 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, ssl_done, gss_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool ssl_done, bool gss_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2120,7 +2283,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2827,6 +2990,7 @@ pmdie(SIGNAL_ARGS)
 			else if (pmState == PM_STARTUP || pmState == PM_RECOVERY)
 			{
 				/* There should be no clients, so proceed to stop children */
+				StopConnectionProxies(SIGTERM);
 				pmState = PM_STOP_BACKENDS;
 			}
 
@@ -2869,6 +3033,7 @@ pmdie(SIGNAL_ARGS)
 				/* Report that we're about to zap live client sessions */
 				ereport(LOG,
 						(errmsg("aborting any active transactions")));
+				StopConnectionProxies(SIGTERM);
 				pmState = PM_STOP_BACKENDS;
 			}
 
@@ -4144,6 +4309,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4153,8 +4319,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4258,6 +4424,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4971,6 +5139,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5098,6 +5267,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5652,6 +5834,74 @@ StartAutovacuumWorker(void)
 }
 
 /*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
+/*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
  *
@@ -6255,6 +6505,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->ExtraOptions, ExtraOptions, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6487,6 +6741,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 
 	strlcpy(ExtraOptions, param->ExtraOptions, MAXPGPATH);
 
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
+
 	/*
 	 * We need to restore fd.c's counts of externally-opened FDs; to avoid
 	 * confusion, be sure to do this after restoring max_safe_fds.  (Note:
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000..9df2fc4
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
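+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Built-in connection proxy worker: accepts client sockets handed over
+ *	  by the postmaster and multiplexes client sessions over a pool of
+ *	  backend connections, one pool per dbname/role combination.
+ *
+ *-------------------------------------------------------------------------
+ */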
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save the startup packet response to be able to send it to a new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Wait event set for backend and client sockets */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected channels pending delayed deletion */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
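+/*
+ * ELOG is a tracing macro for the proxy. It is compiled out by default;
+ * to enable detailed proxy tracing, use the elog()-based definition shown
+ * in the comment below instead of the empty one.
+ */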
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * Backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client to this backend.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If backend completes execution of some query, then it has definitely registered itself in procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
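+/*
+ * Length of the string with each space counted twice, because string_append
+ * escapes every space with a backslash when building the options string.
+ */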
+static size_t
+string_length(char const* str)
+{
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
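+/*
+ * Copy "src" to "dst", prefixing each space with a backslash so that embedded
+ * spaces are not treated as option separators. Returns the new end of "dst".
+ */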
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
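+	/* Build a minimal ErrorResponse ('E') message containing only the primary message field */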
+	pq_sendbyte(&msgbuf, 'E');
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into a single-linked list for delayed deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send the 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * Data is located in the peer buffer. Because we use edge-triggered mode we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion of reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
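+/*
+ * Check whether the statement starts an explicit transaction block (BEGIN or START TRANSACTION).
+ */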
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
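+/*
+ * Conservative check for statements that may be executed inside a transaction block:
+ * the session's SET LOCAL commands are prepended only to such statements.
+ */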
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client disconnects inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send SET command to check if it is correct.
+							 * To avoid "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or a shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when a backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query from this client will be sent later when a backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need the libpq library to be able to establish connections to pool workers.
+		 * This library cannot be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions: an error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add a new client accepted by the postmaster. This client will be assigned to a particular session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions: an error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start new backend instead of terminated */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	/* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if
+			 * the socket is still open. From the epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * Using this check for valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: read is now prohibited */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
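+		/* Periodically terminate pool workers that have been idle longer than IdlePoolWorkerTimeout */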
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because they may still be referenced by other pending events.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching a proxy worker from the postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to the proxy
+ * n_ssl_clients  - number of clients using the SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by the proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted (dedicated) backends
+ * n_idle_backends - number of backends in idle state
+ * n_idle_clients - number of clients in idle state
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 96c2aaa..b06a1d7 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -29,6 +29,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -152,6 +153,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -259,6 +261,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalSndShmemInit();
 	WalRcvShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 4153cc8..a3824e1 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -79,11 +79,29 @@
 #error "no wait set implementation available"
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
+#endif
+
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events.
+ * But poll/WaitForMultipleObjects operate on a dense array of monitored descriptors.
+ * That is why elements of the pollfds and handles arrays must be stored without holes,
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptor array), which is implemented using the "index" field of WaitEvent.
+ */
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* Single-linked list of free events, linked by "pos" and terminated by -1. */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -91,6 +109,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -157,9 +177,9 @@ static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action
 #elif defined(WAIT_USE_KQUEUE)
 static void WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -622,6 +642,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -642,23 +663,23 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_KQUEUE)
 	set->kqueue_ret_events = (struct kevent *) data;
-	data += MAXALIGN(sizeof(struct kevent) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 	if (!AcquireExternalFD())
@@ -750,12 +771,11 @@ FreeWaitEventSet(WaitEventSet *set)
 	close(set->kqueue_fd);
 	ReleaseExternalFD();
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -768,7 +788,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -809,9 +829,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -838,8 +860,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
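+	/* Reuse an event slot from the free list if available, otherwise take the next unused slot */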
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -868,15 +902,41 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, 0);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the given position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.  The latch may be changed to NULL to disable the latch
  * temporarily, and then set back to a latch later.
@@ -891,13 +951,19 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 	int			old_events;
 #endif
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 #if defined(WAIT_USE_KQUEUE)
 	old_events = event->events;
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+	/* ModifyWaitEvent is used to emulate epoll EPOLLET (edge-triggered) flag */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -932,9 +998,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, old_events);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -972,6 +1038,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -980,11 +1048,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -992,11 +1059,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -1159,9 +1231,21 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1599,11 +1683,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1626,15 +1711,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1725,17 +1808,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * On Windows I have a problem where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal the presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (the event is reset just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1801,7 +1892,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1842,7 +1933,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index d86566f..129853f 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -813,7 +813,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 88566bd..79dbd82 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -392,6 +392,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyProc->delayChkpt = false;
 	MyProc->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 411cfad..524df57 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4277,6 +4277,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index f592292..b72a487 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 6ab8216..e07332e 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -131,9 +131,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 10;
@@ -149,3 +155,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 596bcb7..feb8669 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -485,6 +485,13 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 StaticAssertDecl(lengthof(ssl_protocol_versions_info) == (PG_TLS1_3_VERSION + 2),
 				 "array length mismatch");
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 static struct config_enum_entry shared_memory_options[] = {
 #ifndef WIN32
 	{"sysv", SHMEM_TYPE_SYSV, false},
@@ -683,6 +690,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Builtin connection pool"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1360,6 +1369,36 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -2221,6 +2260,53 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends; the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
@@ -2279,6 +2365,16 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
 			gettext_noop("Unix-domain sockets use the usual Unix file system "
@@ -4784,6 +4880,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -8357,6 +8463,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 9cb571f..9d73306 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -775,6 +775,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to non-zero value enables builtin connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 687509b..0bbc44f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -10972,4 +10972,11 @@
   proname => 'is_normalized', prorettype => 'bool', proargtypes => 'text text',
   prosrc => 'unicode_is_normalized' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 0a23281..5aa5abb 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index b115247..4e0f223 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -54,10 +54,9 @@ extern const PGDLLIMPORT PQcommMethods *PqCommMethods;
  * prototypes for functions in pqcomm.c
  */
 extern WaitEventSet *FeBeWaitSet;
-
-extern int	StreamServerPort(int family, const char *hostName,
-							 unsigned short portNumber, const char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+extern int StreamServerPort(int family, const char *hostName,
+							unsigned short portNumber, const char *unixSocketDir,
+							pgsocket ListenSocket[], int ListenPort[], int MaxListen);
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 72e3352..4e1d72c 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -159,6 +159,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 84bf2c3..f9f64d2 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 8b6576b..1e0fec7 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -436,6 +436,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -446,6 +447,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index babc87d..edf5871 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -46,6 +47,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -62,6 +68,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000..254d0f0
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to the server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 7c74202..d53ccec 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -133,9 +133,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -143,12 +145,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -178,6 +183,8 @@ extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 extern void InitializeLatchWaitSet(void);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 9c9a50a..ffa16bd 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -239,6 +239,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 04431d0..19c2595 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1de91ae..aec3306 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6..fed76be 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index e72cb2d..183c8de 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index c830627..7f14dcd 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -130,6 +130,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all tablespace-setup
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all tablespace-setup | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000..ebaa257
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 89e1b39..3e3135b 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -162,6 +162,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -273,6 +274,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index 672bb2d..f60d4ba 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
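
[Editor's note: the latch.c hunks above let events be removed from a WaitEventSet without invalidating the positions already handed out to callers: a "permutation" array maps dense slots (used for the pollfd/handle arrays) to event positions, removal swaps the last dense slot into the hole, and freed positions are threaded into a free list through event->pos. The following standalone C sketch uses hypothetical names and illustrates only that bookkeeping; it is an illustration, not part of the patch.]

#include <stdio.h>

#define MAX_EVENTS 8

typedef struct
{
	int pos;	/* stable position returned to the caller; reused as a free-list link when unused */
	int index;	/* current slot in the dense pollfd/handle array */
	int fd;		/* payload, stands in for the real event data */
} DemoEvent;

typedef struct
{
	DemoEvent events[MAX_EVENTS];
	int permutation[MAX_EVENTS];	/* dense slot -> event position */
	int nevents;
	int free_events;				/* head of the free position list, -1 if empty */
} DemoEventSet;

static int
demo_add(DemoEventSet *set, int fd)
{
	DemoEvent *event;
	int free_event = set->free_events;

	if (set->nevents == MAX_EVENTS)
		return -1;
	if (free_event >= 0)
	{
		/* reuse a previously freed position */
		event = &set->events[free_event];
		set->free_events = event->pos;
		event->pos = free_event;
	}
	else
	{
		event = &set->events[set->nevents];
		event->pos = set->nevents;
	}
	set->permutation[set->nevents] = event->pos;
	event->index = set->nevents++;
	event->fd = fd;
	return event->pos;
}

static void
demo_delete(DemoEventSet *set, int event_pos)
{
	DemoEvent *event = &set->events[event_pos];

	/* fill the hole in the dense arrays with the last slot */
	if (--set->nevents != 0)
	{
		set->permutation[event->index] = set->permutation[set->nevents];
		set->events[set->permutation[set->nevents]].index = event->index;
	}
	/* push the freed position onto the free list for reuse */
	event->pos = set->free_events;
	set->free_events = event_pos;
}

int
main(void)
{
	DemoEventSet set = { .nevents = 0, .free_events = -1 };
	int a = demo_add(&set, 10);
	int b = demo_add(&set, 11);
	int c = demo_add(&set, 12);
	int d;

	demo_delete(&set, b);		/* position b becomes reusable */
	d = demo_add(&set, 13);		/* ...and is handed out again (d == b) */
	printf("positions: a=%d b=%d c=%d d=%d, dense slots=%d\n", a, b, c, d, set.nevents);
	return 0;
}
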
#69Daniel Gustafsson
daniel@yesql.se
In reply to: Konstantin Knizhnik (#68)
Re: Built-in connection pooler

On 17 Sep 2020, at 10:40, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

1. Should I myself change status from WfA to some other?

Yes, when you've addressed any issues raised and posted a new version it's very
helpful to the CFM and the community if you update the status.

2. Is there some way to receive notifications that patch is not applied any more?

Not at the moment, but periodically checking the CFBot page for your patches is
a good habit:

http://cfbot.cputube.org/konstantin-knizhnik.html

cheers ./daniel

#70Konstantin Knizhnik
knizhnik@garret.ru
In reply to: Daniel Gustafsson (#69)
1 attachment(s)
Re: Built-in connection pooler

People asked me to resubmit the built-in connection pooler patch to the commitfest.
A rebased version of the connection pooler patch is attached.
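
[Editor's note: for readers who want to try the attached patch, here is a minimal configuration sketch based on the parameters documented in it; the values below are purely illustrative, not recommendations.]

    # postgresql.conf
    connection_proxies = 1            # non-zero enables the built-in pooler
    session_pool_size = 10            # backends per database/role pool (the default)
    proxy_port = 6543                 # pooled clients connect here
    max_sessions = 1000               # per-proxy client limit (the default)
    idle_pool_worker_timeout = 60000  # terminate pool workers idle for 60 seconds
    session_schedule = 'round-robin'  # or 'random' / 'load-balancing'

Pooled clients then connect through the proxy port (for example, psql -p 6543), while the regular port keeps serving dedicated backends and is also used by the proxy to launch its worker backends.
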

Attachments:

builtin_connection_proxy-30.patchtext/x-patch; charset=UTF-8; name=builtin_connection_proxy-30.patchDownload
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index 6fbfef2b12..27aa6cba8e 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -11,6 +11,7 @@
 
 #include "commands/trigger.h"
 #include "executor/spi.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -94,6 +95,8 @@ check_primary_key(PG_FUNCTION_ARGS)
 	else
 		tuple = trigdata->tg_newtuple;
 
+	MyProc->is_tainted = true;
+
 	trigger = trigdata->tg_trigger;
 	nargs = trigger->tgnargs;
 	args = trigger->tgargs;
@@ -286,6 +289,8 @@ check_foreign_key(PG_FUNCTION_ARGS)
 		/* internal error */
 		elog(ERROR, "check_foreign_key: cannot process INSERT events");
 
+	MyProc->is_tainted = true;
+
 	/* Have to check tg_trigtuple - tuple being deleted */
 	trigtuple = trigdata->tg_trigtuple;
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index ee4925d6d9..4c862bbae9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -734,6 +734,169 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one connection proxy when session pooling is enabled.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached, new connections are not accepted.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched non-tainted backends are never terminated even if there are no active sessions.
+          A backend is considered tainted if the client updates GUCs, creates temporary tables or prepared statements.
+          A tainted backend can serve only one client.
+        </para>
+        <para>
+          The default value is 10, so up to 10 backends will serve each database.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxy-port" xreflabel="proxy_port">
+      <term><varname>proxy_port</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>proxy_port</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the TCP port for the connection pooler.
+          Clients connected to the main "port" will be assigned dedicated backends,
+          while clients connected to the proxy port will be connected to backends through the proxy, which
+          performs transaction-level scheduling.
+       </para>
+        <para>
+          The default value is 6543.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-proxies" xreflabel="connection_proxies">
+      <term><varname>connection_proxies</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_proxies</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Sets the number of connection proxies.
+          The postmaster spawns a separate worker process for each proxy and scatters connections between proxies using one of the scheduling policies (round-robin, random, load-balancing).
+          Each proxy launches its own subset of backends,
+          so the maximal number of non-tainted backends is <varname>session_pool_size*connection_proxies*databases*roles</varname>.
+       </para>
+        <para>
+          The default value is 0, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to proxies when
+          connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy the postmaster cyclically scatters sessions between proxies.
+        </para>
+        <para>
+          With the <literal>random</literal> policy the postmaster randomly chooses a proxy for each new session.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy the postmaster chooses the proxy with the lowest load average.
+          The load average of a proxy is estimated by the number of client connections assigned to it, with extra weight for SSL connections.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-idle-pool-worker-timeout" xreflabel="idle_pool_worker_timeout">
+      <term><varname>idle_pool_worker_timeout</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>idle_pool_worker_timeout</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+         Terminate an idle connection pool worker after the specified number of milliseconds.
+         The default value is 0, so pool workers are never terminated.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers once <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-proxying-gucs" xreflabel="proxying_gucs">
+      <term><varname>proxying_gucs</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>proxying_gucs</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Support setting parameters in connection pooler sessions.
+          When this parameter is switched on, setting session parameters is replaced with setting local (transaction) parameters,
+          which are concatenated with each transaction or standalone statement. This makes it possible not to mark the backend as tainted.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-multitenant-proxy" xreflabel="multitenant_proxy">
+      <term><varname>multitenant_proxy</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>multitenant_proxy</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          One pool worker can serve clients with different roles.
+          When this parameter is switched on, each transaction or standalone statement
+          is prepended with a "set role" command.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/connpool.sgml b/doc/src/sgml/connpool.sgml
new file mode 100644
index 0000000000..c63ba2626e
--- /dev/null
+++ b/doc/src/sgml/connpool.sgml
@@ -0,0 +1,182 @@
+<!-- doc/src/sgml/connpool.sgml -->
+
+ <chapter id="connection-pooling">
+  <title>Connection pooling</title>
+
+  <indexterm zone="connection-pooling">
+   <primary>built-in connection pool proxy</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> spawns a separate process (backend) for each client.
+    For a large number of clients this model can consume a large amount of system
+    resources and lead to significant performance degradation, especially on computers with a large
+    number of CPU cores. The reason is high contention between backends for Postgres resources.
+    Also, the sizes of many Postgres internal data structures are proportional to the number of
+    active backends, as is the complexity of the algorithms operating on them.
+  </para>
+
+  <para>
+    This is why many production Postgres installations use some kind of connection pooling, such as
+    pgbouncer, J2EE, and odyssey.  Using an external connection pooler requires additional efforts for installation,
+    configuration and maintenance. Also pgbouncer (the most popular connection pooler for Postgres) is
+    single-threaded and so can be a bottleneck on high-load systems, so multiple instances of pgbouncer have to be launched.
+  </para>
+
+  <para>
+    Starting with version 12 <productname>PostgreSQL</productname> provides a built-in connection pooler.
+    This chapter describes the architecture and usage of the built-in connection pooler.
+  </para>
+
+ <sect1 id="how-connection-pooler-works">
+  <title>How Built-in Connection Pooler Works</title>
+
+  <para>
+    The built-in connection pooler spawns one or more proxy processes which connect clients and backends.
+    The number of proxy processes is controlled by the <varname>connection_proxies</varname> configuration parameter.
+    To avoid substantial changes in the Postgres locking mechanism, only a transaction-level pooling policy is implemented.
+    It means that the pooler is able to reschedule a backend to another session only when it has completed the current transaction.
+  </para>
+
+  <para>
+    Since each Postgres backend is able to work with only a single database, each proxy process maintains
+    a hash table of connection pools, one for each <literal>dbname,role</literal> pair.
+    The maximal number of backends which can be spawned by a connection pool is limited by the
+    <varname>session_pool_size</varname> configuration variable.
+    So the maximal number of non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+  </para>
+
+  <para>
+    As mentioned above, a separate connection pool is created for each <literal>dbname,role</literal> pair. A Postgres backend is not able to work with more than one database, but it is possible to change the current user (role) inside one connection.
+    If the <varname>multitenant_proxy</varname> option is switched on, then a separate pool
+    is created only for each database, and the current user is explicitly specified for each transaction/standalone statement using a <literal>set role</literal> command.
+    To support this mode you need to grant all roles permission to switch between each other.
+  </para>
+
+  <para>
+    To minimize the number of changes in the Postgres core, the built-in connection pooler does not try to save/restore
+    session context. If the session context is modified by the client application
+    (changing values of session variables (GUCs), creating temporary tables, preparing statements, advisory locking),
+    then the backend executing this session is considered <emphasis>tainted</emphasis>.
+    It is now dedicated to this session and can not be rescheduled to another session.
+    Once this session is terminated, the backend is terminated as well.
+    Non-tainted backends are not terminated even if there are no more connected sessions.
+    Switching on the <varname>proxying_gucs</varname> configuration option allows setting session parameters without marking the backend as <emphasis>tainted</emphasis>.
+  </para>
+
+  <para>
+    The built-in connection pooler accepts connections on a separate port (<varname>proxy_port</varname> configuration option, default value is 6543).
+    If a client connects to Postgres through the standard port (<varname>port</varname> configuration option, default value is 5432), then a normal (<emphasis>dedicated</emphasis>) backend is created. It works only
+    with this client and is terminated when the client disconnects. The standard port is also used by the proxy itself to
+    launch new worker backends. It means that to enable the connection pooler, Postgres should be configured
+    to accept local connections (<literal>pg_hba.conf</literal> file).
+  </para>
+
+  <para>
+    If a client application is connected through the proxy port, then its communication with the backend is always
+    performed through the proxy. Even if it changes the session context and the backend becomes <emphasis>tainted</emphasis>,
+    all traffic between this client and the backend still goes through the proxy.
+  </para>
+
+  <para>
+    The postmaster accepts connections on the proxy port and redirects them to one of the connection proxies.
+    Right now sessions are bound to a proxy and can not migrate between proxies.
+    To provide uniform load balancing of proxies, the postmaster uses one of three scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    In the last case the postmaster chooses the proxy with the smallest number of already attached clients, with
+    extra weight added to SSL connections (which consume more CPU).
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-configuration">
+  <title>Built-in Connection Pooler Configuration</title>
+
+  <para>
+    There are four main configuration variables controlling connection pooler:
+    <varname>session_pool_size</varname>, <varname>connection_proxies</varname>, <varname>max_sessions</varname> and <varname>proxy_port</varname>.
+    Connection pooler is enabled if all of them are non-zero.
+  </para>
+
+  <para>
+    <varname>connection_proxies</varname> specifies the number of connection proxy processes to be spawned.
+    Default value is zero, so connection pooling is disabled by default.
+  </para>
+
+  <para>
+    <varname>session_pool_size</varname> specifies the maximal number of backends per connection pool. The maximal number of launched non-dedicated backends in pooling mode is limited by
+    <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<literal>#databases</literal>*<literal>#roles</literal>.
+    If the number of backends is too small, the server will not be able to utilize all system resources.
+    But a too large value can cause performance degradation because of large snapshots and lock contention.
+  </para>
+
+  <para>
+    The <varname>max_sessions</varname> parameter specifies the maximal number of sessions which can be handled by one connection proxy.
+    It affects only the size of the wait event set and so can be large without any essential negative impact on system resource consumption.
+    The default value is 1000. So the maximal number of connections to one database/role is limited by <varname>connection_proxies</varname>*<varname>session_pool_size</varname>*<varname>max_sessions</varname>.
+  </para>
+
+  <para>
+    The connection proxy accepts connections on a special port, defined by <varname>proxy_port</varname>.
+    The default value is 6543, but it can be changed to the standard Postgres port 5432, so that all connections to the databases will be pooled.
+    It is still necessary to have a port for direct connections to the database (dedicated backends);
+    it is needed for the connection pooler itself to launch worker backends.
+  </para>
+
+  <para>
+    The postmaster scatters sessions between proxies using one of three available scheduling policies:
+    <literal>round-robin</literal>, <literal>random</literal> and <literal>load-balancing</literal>.
+    The policy can be set using the <varname>session_schedule</varname> configuration variable. The default policy is
+    <literal>round-robin</literal>, which causes cyclic distribution of sessions between proxies.
+    It should work well in case of a more or less uniform workload.
+    The smartest policy is <literal>load-balancing</literal>, which tries to choose the least loaded proxy
+    based on the available statistics. It is possible to monitor the proxies' state using the <function>pg_pooler_state()</function> function, which returns information about the number of clients, backends and pools for each proxy, as well
+    as some statistics about the number of processed transactions and the amount of data
+    sent from clients to backends (<varname>rx_bytes</varname>) and from backends to clients (<varname>tx_bytes</varname>).
+  </para>
+
+  <para>
+    Because pooled backends are not terminated on client exit, it will not
+    be possible to drop a database to which they are connected.  It can be achieved without a server restart using the <varname>restart_pooler_on_reload</varname> variable. Setting this variable to <literal>true</literal> causes shutdown of all pooled backends after execution of the <function>pg_reload_conf()</function> function. Then it will be possible to drop the database. Alternatively you can specify <varname>idle_pool_worker_timeout</varname>, which
+    forces termination of workers not used for the specified time. If a database is not accessed for a long time, then all pool workers are terminated.
+  </para>
+
+ </sect1>
+
+ <sect1 id="connection-pooler-constraints">
+  <title>Built-in Connection Pooler Pros and Cons</title>
+
+  <para>
+    Unlike pgbouncer and other external connection poolers, the built-in connection pooler doesn't require installation and configuration of some other components.
+    It also does not introduce any limitations for clients: existing clients can work through proxy and don't notice any difference.
+    If client application requires session context, then it will be served by dedicated backend. Such connection will not participate in
+    connection pooling but it will correctly work. This is the main difference with pgbouncer,
+    which may cause incorrect behavior of client application in case of using other session level pooling policy.
+    And if application is not changing session context, then it can be implicitly pooled, reducing number of active backends.
+  </para>
+
+  <para>
+    The main limitation of the current built-in connection pooler implementation is that it is not able to save/restore session context.
+    Although it is not so difficult to do, it requires more changes in the Postgres core. Developers of client applications have
+    the choice to either avoid using session-specific operations, or not use built-in pooling. For example, using prepared statements can improve the speed of simple queries
+    up to two times. But prepared statements can not be handled by a pooled backend, so if all clients are using prepared statements, then there will be no connection pooling
+    even if connection pooling is enabled.
+  </para>
+
+  <para>
+    Redirecting connections through the connection proxy definitely has a negative effect on total system performance, especially latency.
+    The overhead of the connection proxy depends on many factors, such as characteristics of external and internal networks, complexity of queries and size of returned result set.
+    With a small number of connections (10), the pgbench benchmark in select-only mode shows almost two times worse performance for local connections through the connection pooler compared with direct local connections. For a much larger number of connections (when pooling is actually required), pooling mode outperforms direct connection mode.
+  </para>
+
+  <para>
+    Another obvious limitation of transaction-level pooling is that a long-lived transaction can cause starvation of
+    other clients. It greatly depends on application design. If an application opens a database transaction and then waits for user input or some other external event, then the backend can stay in the <emphasis>idle-in-transaction</emphasis>
+    state for a long time. An <emphasis>idle-in-transaction</emphasis> backend can not be rescheduled to another session.
+    The obvious recommendation is to avoid long-lived transactions and to set <varname>idle_in_transaction_session_timeout</varname> to implicitly abort such transactions.
+  </para>
+
+ </sect1>
+
+ </chapter>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index db1d369743..7911e0029b 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -29,6 +29,7 @@
 <!ENTITY syntax     SYSTEM "syntax.sgml">
 <!ENTITY textsearch SYSTEM "textsearch.sgml">
 <!ENTITY typeconv   SYSTEM "typeconv.sgml">
+<!ENTITY connpool   SYSTEM "connpool.sgml">
 
 <!-- administrator's guide -->
 <!ENTITY backup        SYSTEM "backup.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 730d5fdc34..13db3ff32e 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -166,6 +166,7 @@ break is not needed in a wider output rendering.
   &maintenance;
   &backup;
   &high-availability;
+  &connpool;
   &monitoring;
   &diskusage;
   &wal;
diff --git a/src/Makefile b/src/Makefile
index 79e274a476..da1c8b548c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -23,6 +23,7 @@ SUBDIRS = \
 	interfaces \
 	backend/replication/libpqwalreceiver \
 	backend/replication/pgoutput \
+	backend/postmaster/libpqconn \
 	fe_utils \
 	bin \
 	pl \
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 6f2397bd36..f80577e3d8 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -29,6 +29,7 @@
 #include "executor/tstoreReceiver.h"
 #include "miscadmin.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "utils/memutils.h"
@@ -59,6 +60,9 @@ PerformCursorOpen(ParseState *pstate, DeclareCursorStmt *cstmt, ParamListInfo pa
 				(errcode(ERRCODE_INVALID_CURSOR_NAME),
 				 errmsg("invalid cursor name: must not be empty")));
 
+	if (cstmt->options & CURSOR_OPT_HOLD)
+		MyProc->is_tainted = true; /* cursors are not compatible with builtin connection pooler */
+
 	/*
 	 * If this is a non-holdable cursor, we require that this statement has
 	 * been executed inside a transaction block (or else, it would have no
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index f767751c71..f73d7563b6 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,6 +30,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
@@ -439,6 +440,7 @@ StorePreparedStatement(const char *stmt_name,
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
+	MyProc->is_tainted = true;
 
 	/* Shouldn't get a duplicate entry */
 	if (found)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0415df9ccb..f0ee21e8f4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -251,6 +251,19 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	heap_freetuple(tuple);
 	table_close(rel, RowExclusiveLock);
 
+	/*
+	 * TODO:
+	 * Using currval() may cause incorrect behavior with the connection pooler.
+	 * Unfortunately, marking the backend as tainted in currval() is too late.
+	 * This is why it is done in nextval(), although it is not strictly required,
+	 * because nextval() may not be followed by currval().
+	 * But currval() may also not be preceded by nextval().
+	 * To make the regression tests pass, the backend is also marked as tainted when it
+	 * creates a sequence. Certainly this is just a temporary workaround, because a sequence
+	 * may be created in one backend and accessed in another.
+	 */
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	return address;
 }
 
@@ -564,6 +577,8 @@ nextval(PG_FUNCTION_ARGS)
 	 */
 	relid = RangeVarGetRelid(sequence, NoLock, false);
 
+	MyProc->is_tainted = true; /* in case of using currval() */
+
 	PG_RETURN_INT64(nextval_internal(relid, true));
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9b2800bf5e..3186615b73 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -623,6 +623,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
 				 errmsg("ON COMMIT can only be used on temporary tables")));
 
+	if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
+		&& stmt->oncommit != ONCOMMIT_DROP)
+		MyProc->is_tainted = true;
+
 	if (stmt->partspec != NULL)
 	{
 		if (relkind != RELKIND_RELATION)
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 4c7b1e7bfd..51b82aa96c 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -177,14 +177,12 @@ pq_init(void)
 
 	/* initialize state variables */
 	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
+	if (!PqSendBuffer)
+		PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
 	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
 	PqCommBusy = false;
 	PqCommReadingMsg = false;
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
-
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
 	 * nonblocking mode and use latches to implement blocking semantics if
@@ -201,6 +199,11 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	if (FeBeWaitSet)
+		FreeWaitEventSet(FeBeWaitSet);
+	else
+		on_proc_exit(socket_close, 0);
+
 	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
 	socket_pos = AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE,
 								   MyProcPort->sock, NULL, NULL);
@@ -217,6 +220,7 @@ pq_init(void)
 	Assert(latch_pos == FeBeWaitSetLatchPos);
 }
 
+
 /* --------------------------------
  *		socket_comm_reset - reset libpq during error recovery
  *
@@ -314,7 +318,7 @@ socket_close(int code, Datum arg)
 int
 StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 				 const char *unixSocketDir,
-				 pgsocket ListenSocket[], int MaxListen)
+				 pgsocket ListenSocket[], int ListenPort[], int MaxListen)
 {
 	pgsocket	fd;
 	int			err;
@@ -580,6 +584,7 @@ StreamServerPort(int family, const char *hostName, unsigned short portNumber,
 							familyDesc, addrDesc, (int) portNumber)));
 
 		ListenSocket[listen_index] = fd;
+		ListenPort[listen_index] = portNumber;
 		added++;
 	}
 
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index 2d00b4f05a..8c763c719d 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -25,7 +25,8 @@ OBJS = \
 	$(TAS) \
 	atomics.o \
 	pg_sema.o \
-	pg_shmem.o
+	pg_shmem.o \
+	send_sock.o
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000000..0a90a50fd4
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char*)&dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
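+	/*
+	 * On Unix systems the descriptor is transferred as SCM_RIGHTS ancillary
+	 * data over a Unix-domain socket; the kernel installs a duplicate of the
+	 * descriptor in the receiving process.
+	 */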
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char*)&src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+	return s;
+#else
+	pgsocket	sock;
+	char		c_buffer[CMSG_SPACE(sizeof(sock))];
+	char		m_buffer[1];
+	struct msghdr msg = {0};
+	struct iovec io;
+	struct cmsghdr * cmsg;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+	{
+		elog(WARNING, "Failed to transfer socket");
+		return PGINVALID_SOCKET;
+	}
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index a8012c2798..bc43300f93 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -698,3 +698,65 @@ pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, c
 		memcpy(writefds, &outwritefds, sizeof(fd_set));
 	return nummatches;
 }
+
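+/*
+ * Emulate Unix socketpair() on Windows: bind a listening socket to the
+ * loopback interface, connect to it and accept the connection, which yields
+ * a pair of connected stream sockets.
+ */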
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union {
+	   struct sockaddr_in inaddr;
+	   struct sockaddr addr;
+	} a;
+	SOCKET listener;
+	int e;
+	socklen_t addrlen = sizeof(a.inaddr);
+	DWORD flags = 0;
+	int reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;) {
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+			   (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if	(bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if	(getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index bfdf6a833d..11dd9c8733 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -24,6 +24,7 @@ OBJS = \
 	postmaster.o \
 	startup.o \
 	syslogger.o \
-	walwriter.o
+	walwriter.o \
+	proxy.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/libpqconn/Makefile b/src/backend/postmaster/libpqconn/Makefile
new file mode 100644
index 0000000000..f05b72758e
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/Makefile
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for src/backend/postmaster/libpqconn
+#
+# IDENTIFICATION
+#    src/backend/postmaster/libpqconn/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/postmaster/libpqconn
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS = libpqconn.o $(WIN32RES)
+SHLIB_LINK_INTERNAL = $(libpq)
+SHLIB_LINK = $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+PGFILEDESC = "libpqconn - open libpq connection"
+NAME = libpqconn
+
+all: all-shared-lib
+
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean maintainer-clean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/backend/postmaster/libpqconn/libpqconn.c b/src/backend/postmaster/libpqconn/libpqconn.c
new file mode 100644
index 0000000000..d950a8c281
--- /dev/null
+++ b/src/backend/postmaster/libpqconn/libpqconn.c
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * libpqconn.c
+ *
+ * This file provides a way to establish a connection to a postgres instance from a backend.
+ *
+ * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/postmaster/libpqconn/libpqconn.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <sys/time.h>
+
+#include "fmgr.h"
+#include "libpq-fe.h"
+#include "postmaster/postmaster.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static void*
+libpq_connectdb(char const* keywords[], char const* values[], char** error)
+{
+	PGconn* conn = PQconnectdbParams(keywords, values, false);
+	if (conn && PQstatus(conn) != CONNECTION_OK)
+	{
+		ereport(WARNING,
+				(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+				 errmsg("could not set up local connection to server"),
+				 errdetail_internal("%s", pchomp(PQerrorMessage(conn)))));
+		*error = strdup(PQerrorMessage(conn));
+		PQfinish(conn);
+		return NULL;
+	}
+	*error = NULL;
+	return conn;
+}
+
+void _PG_init(void)
+{
+	LibpqConnectdbParams = libpq_connectdb;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index ef0be4ca38..a39a85a739 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -114,6 +114,7 @@
 #include "postmaster/interrupt.h"
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "postmaster/syslogger.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
@@ -198,6 +199,9 @@ BackgroundWorker *MyBgworkerEntry = NULL;
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
 
+/* The socket number we are listening for pooled connections on */
+int			ProxyPortNumber;
+
 /* The directory names for Unix socket(s) */
 char	   *Unix_socket_directories;
 
@@ -218,6 +222,7 @@ int			ReservedBackends;
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
 static pgsocket ListenSocket[MAXLISTEN];
+static int      ListenPort[MAXLISTEN];
 
 /*
  * These globals control the behavior of the postmaster in case some
@@ -244,6 +249,18 @@ char	   *bonjour_name;
 bool		restart_after_crash = true;
 bool		remove_temp_files_after_crash = true;
 
+typedef struct ConnectionProxy
+{
+	int pid;
+	pgsocket socks[2];
+} ConnectionProxy;
+
+ConnectionProxy* ConnectionProxies;
+static bool ConnectionProxiesStarted;
+static int CurrentConnectionProxy; /* index used for round-robin distribution of connections between proxies */
+
+void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** error);
+
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -414,7 +431,6 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
@@ -436,6 +452,7 @@ static pid_t StartChildProcess(AuxProcType type);
 static void StartAutovacuumWorker(void);
 static void MaybeStartWalReceiver(void);
 static void InitPostmasterDeathWatchHandle(void);
+static void StartProxyWorker(int id);
 
 /*
  * Archiver is allowed to start up at the current postmaster state?
@@ -489,6 +506,8 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket proxySocket;
+	int         proxyId;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -572,6 +591,48 @@ int			postmaster_alive_fds[2] = {-1, -1};
 HANDLE		PostmasterHandle;
 #endif
 
+static void
+StartConnectionProxies(void)
+{
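+	/*
+	 * Proxy workers are launched only when session pooling is enabled.  Each
+	 * proxy gets a Unix-domain socket pair through which the postmaster
+	 * later passes accepted client sockets (see pg_send_sock in ServerLoop).
+	 */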
+	if (SessionPoolSize > 0 && ConnectionProxiesNumber > 0 && !ConnectionProxiesStarted)
+	{
+		int i;
+		if (ConnectionProxies == NULL)
+		{
+			ConnectionProxies = malloc(sizeof(ConnectionProxy)*ConnectionProxiesNumber);
+			for (i = 0; i < ConnectionProxiesNumber; i++)
+			{
+				if (socketpair(AF_UNIX, SOCK_STREAM, 0, ConnectionProxies[i].socks) < 0)
+					ereport(FATAL,
+							(errcode_for_file_access(),
+							 errmsg_internal("could not create socket pair for launching sessions: %m")));
+			}
+		}
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			StartProxyWorker(i);
+		}
+		ConnectionProxiesStarted = true;
+	}
+}
+
+/*
+ * Send signal to connection proxies
+ */
+static void
+StopConnectionProxies(int signal)
+{
+	if (ConnectionProxiesStarted)
+	{
+		int i;
+		for (i = 0; i < ConnectionProxiesNumber; i++)
+		{
+			signal_child(ConnectionProxies[i].pid, signal);
+		}
+		ConnectionProxiesStarted = false;
+	}
+}
+
 /*
  * Postmaster main entry point
  */
@@ -584,6 +645,9 @@ PostmasterMain(int argc, char *argv[])
 	bool		listen_addr_saved = false;
 	int			i;
 	char	   *output_config_variable = NULL;
+	bool        contains_localhost = false;
+	int         ports[2];
+	int         n_ports = 0;
 
 	InitProcessGlobals();
 
@@ -1131,6 +1195,11 @@ PostmasterMain(int argc, char *argv[])
 
 	on_proc_exit(CloseServerPorts, 0);
 
+	/* Listen on proxy socket only if session pooling is enabled */
+	if (ProxyPortNumber > 0 && ConnectionProxiesNumber > 0 && SessionPoolSize > 0)
+		ports[n_ports++] = ProxyPortNumber;
+	ports[n_ports++] = PostPortNumber;
+
 	if (ListenAddresses)
 	{
 		char	   *rawstring;
@@ -1154,32 +1223,36 @@ PostmasterMain(int argc, char *argv[])
 		foreach(l, elemlist)
 		{
 			char	   *curhost = (char *) lfirst(l);
-
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i < n_ports; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				int port = ports[i];
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) port,
+											  NULL,
+											  ListenSocket, ListenPort, MAXLISTEN);
+				contains_localhost |= strcmp(curhost, "localhost") == 0;
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
 
 		if (!success && elemlist != NIL)
@@ -1249,29 +1322,32 @@ PostmasterMain(int argc, char *argv[])
 					 errmsg("invalid list syntax in parameter \"%s\"",
 							"unix_socket_directories")));
 		}
-
+		contains_localhost = true;
 		foreach(l, elemlist)
 		{
 			char	   *socketdir = (char *) lfirst(l);
+			for (i = 0; i < n_ports; i++)
+			{
+				int port = ports[i];
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) port,
+										  socketdir,
+										  ListenSocket, ListenPort, MAXLISTEN);
 
-			if (status == STATUS_OK)
-			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1281,6 +1357,20 @@ PostmasterMain(int argc, char *argv[])
 	}
 #endif
 
+	if (!contains_localhost && ProxyPortNumber > 0)
+	{
+		/* we need to accept local connections from proxy */
+		status = StreamServerPort(AF_UNSPEC, "localhost",
+								  (unsigned short) PostPortNumber,
+								  NULL,
+								  ListenSocket, ListenPort, MAXLISTEN);
+		if (status != STATUS_OK)
+		{
+			ereport(WARNING,
+					(errmsg("could not create listen socket for localhost")));
+		}
+	}
+
 	/*
 	 * check that we have some socket to listen on
 	 */
@@ -1406,6 +1496,8 @@ PostmasterMain(int argc, char *argv[])
 	/* Some workers may be scheduled to start now */
 	maybe_start_bgworkers();
 
+	StartConnectionProxies();
+
 	status = ServerLoop();
 
 	/*
@@ -1644,6 +1736,57 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+/**
+ * This function tries to estimate the workload of a proxy.
+ * We have a lot of information about the proxy state in the ProxyState array:
+ * total number of clients, SSL clients, backends, traffic, number of transactions,...
+ * So in principle it is possible to implement a much more sophisticated evaluation function,
+ * but right now we take into account only the number of clients and SSL connections (which require much more CPU).
+ */
+static uint64
+GetConnectionProxyWorkload(int id)
+{
+	return ProxyState[id].n_clients + ProxyState[id].n_ssl_clients*3;
+}
+
+/**
+ * Choose a connection proxy for this session.
+ * Right now sessions cannot be moved between proxies (in principle it is not so difficult to implement),
+ * so to support load balancing we have to do some smart work here.
+ */
+static ConnectionProxy*
+SelectConnectionProxy(void)
+{
+	int i;
+	uint64 min_workload;
+	int least_loaded_proxy;
+
+	switch (SessionSchedule)
+	{
+	  case SESSION_SCHED_ROUND_ROBIN:
+		return &ConnectionProxies[CurrentConnectionProxy++ % ConnectionProxiesNumber];
+	  case SESSION_SCHED_RANDOM:
+		return &ConnectionProxies[random() % ConnectionProxiesNumber];
+	  case SESSION_SCHED_LOAD_BALANCING:
+		min_workload = GetConnectionProxyWorkload(0);
+		least_loaded_proxy = 0;
+		for (i = 1; i < ConnectionProxiesNumber; i++)
+		{
+			uint64 workload = GetConnectionProxyWorkload(i);
+			if (workload < min_workload)
+			{
+				min_workload = workload;
+				least_loaded_proxy = i;
+			}
+		}
+		return &ConnectionProxies[least_loaded_proxy];
+	  default:
+		Assert(false);
+	}
+	return NULL;
+}
+
+
 /*
  * Main idle loop of postmaster
  *
@@ -1734,8 +1877,18 @@ ServerLoop(void)
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
+						if (ConnectionProxies && ListenPort[i] == ProxyPortNumber)
+						{
+							ConnectionProxy* proxy = SelectConnectionProxy();
+							if (pg_send_sock(proxy->socks[0], port->sock, proxy->pid) < 0)
+							{
+								elog(LOG, "could not send socket to connection pool: %m");
+							}
+						}
+						else
+						{
+							BackendStartup(port, NULL);
+						}
 						/*
 						 * We no longer need the open socket or port structure
 						 * in this process
@@ -1938,8 +2091,6 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 {
 	int32		len;
 	char	   *buf;
-	ProtocolVersion proto;
-	MemoryContext oldcontext;
 
 	pq_startmsgread();
 
@@ -2003,6 +2154,18 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool gss_done)
 	}
 	pq_endmsgread();
 
+	return ParseStartupPacket(port, TopMemoryContext, buf, len, ssl_done, gss_done);
+}
+
+int
+ParseStartupPacket(Port *port, MemoryContext memctx, void* buf, int len, bool ssl_done, bool gss_done)
+{
+	ProtocolVersion proto;
+	MemoryContext oldcontext;
+
+	am_walsender = false;
+	am_db_walsender = false;
+
 	/*
 	 * The first field is either a protocol version number or a special
 	 * request code.
@@ -2113,7 +2276,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	/* Handle protocol version 3 startup packet */
 	{
@@ -2129,7 +2292,7 @@ retry1:
 
 		while (offset < len)
 		{
-			char	   *nameptr = buf + offset;
+			char	   *nameptr = (char*)buf + offset;
 			int32		valoffset;
 			char	   *valptr;
 
@@ -2138,7 +2301,7 @@ retry1:
 			valoffset = offset + strlen(nameptr) + 1;
 			if (valoffset >= len)
 				break;			/* missing value, will complain below */
-			valptr = buf + valoffset;
+			valptr = (char*)buf + valoffset;
 
 			if (strcmp(nameptr, "database") == 0)
 				port->database_name = pstrdup(valptr);
@@ -2781,6 +2944,7 @@ pmdie(SIGNAL_ARGS)
 			else if (pmState == PM_STARTUP || pmState == PM_RECOVERY)
 			{
 				/* There should be no clients, so proceed to stop children */
+				StopConnectionProxies(SIGTERM);
 				pmState = PM_STOP_BACKENDS;
 			}
 
@@ -2823,6 +2987,7 @@ pmdie(SIGNAL_ARGS)
 				/* Report that we're about to zap live client sessions */
 				ereport(LOG,
 						(errmsg("aborting any active transactions")));
+				StopConnectionProxies(SIGTERM);
 				pmState = PM_STOP_BACKENDS;
 			}
 
@@ -4111,6 +4276,7 @@ TerminateChildren(int signal)
 		signal_child(PgArchPID, signal);
 	if (PgStatPID != 0)
 		signal_child(PgStatPID, signal);
+	StopConnectionProxies(signal);
 }
 
 /*
@@ -4120,8 +4286,8 @@ TerminateChildren(int signal)
  *
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
-static int
-BackendStartup(Port *port)
+int
+BackendStartup(Port *port, int* backend_pid)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
@@ -4225,6 +4391,8 @@ BackendStartup(Port *port)
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
 #endif
+	if (backend_pid)
+		*backend_pid = pid;
 
 	return STATUS_OK;
 }
@@ -4894,6 +5062,7 @@ SubPostmasterMain(int argc, char *argv[])
 	if (strcmp(argv[1], "--forkbackend") == 0 ||
 		strcmp(argv[1], "--forkavlauncher") == 0 ||
 		strcmp(argv[1], "--forkavworker") == 0 ||
+		strcmp(argv[1], "--forkproxy") == 0 ||
 		strcmp(argv[1], "--forkboot") == 0 ||
 		strncmp(argv[1], "--forkbgworker=", 15) == 0)
 		PGSharedMemoryReAttach();
@@ -5021,6 +5190,19 @@ SubPostmasterMain(int argc, char *argv[])
 
 		AutoVacWorkerMain(argc - 2, argv + 2);	/* does not return */
 	}
+	if (strcmp(argv[1], "--forkproxy") == 0)
+	{
+		/* Restore basic shared memory pointers */
+		InitShmemAccess(UsedShmemSegAddr);
+
+		/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */
+		InitProcess();
+
+		/* Attach process to shared data structures */
+		CreateSharedMemoryAndSemaphores(0);
+
+		ConnectionProxyMain(argc - 2, argv + 2);	/* does not return */
+	}
 	if (strncmp(argv[1], "--forkbgworker=", 15) == 0)
 	{
 		int			shmem_slot;
@@ -5564,6 +5746,74 @@ StartAutovacuumWorker(void)
 	}
 }
 
+/*
+ * StartProxyWorker
+ *		Start a proxy worker process.
+ *
+ * This function is here because it enters the resulting PID into the
+ * postmaster's private backends list.
+ *
+ * NB -- this code very roughly matches BackendStartup.
+ */
+static void
+StartProxyWorker(int id)
+{
+	Backend    *bn;
+	int         pid;
+
+	/*
+	 * Compute the cancel key that will be assigned to this session. We
+	 * probably don't need cancel keys for proxy workers, but we'd
+	 * better have something random in the field to prevent unfriendly
+	 * people from sending cancels to them.
+	 */
+	if (!RandomCancelKey(&MyCancelKey))
+	{
+		ereport(LOG,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("could not generate random cancel key")));
+		return;
+	}
+	bn = (Backend *) malloc(sizeof(Backend));
+	if (bn)
+	{
+		bn->cancel_key = MyCancelKey;
+
+		/* Proxy workers are not dead_end and need a child slot */
+		bn->dead_end = false;
+		bn->child_slot = MyPMChildSlot = AssignPostmasterChildSlot();
+		bn->bgworker_notify = false;
+
+		MyProxyId = id;
+		MyProxySocket = ConnectionProxies[id].socks[1];
+		pid = ConnectionProxyStart();
+		if (pid > 0)
+		{
+			bn->pid = pid;
+			bn->bkend_type = BACKEND_TYPE_BGWORKER;
+			dlist_push_head(&BackendList, &bn->elem);
+#ifdef EXEC_BACKEND
+			ShmemBackendArrayAdd(bn);
+#endif
+			/* all OK */
+			ConnectionProxies[id].pid = pid;
+			ProxyState[id].pid = pid;
+			return;
+		}
+
+		/*
+		 * fork failed, fall through to report -- actual error message was
+		 * logged by ConnectionProxyStart
+		 */
+		(void) ReleasePostmasterChildSlot(bn->child_slot);
+		free(bn);
+	}
+	else
+		ereport(LOG,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory")));
+}
+
 /*
  * MaybeStartWalReceiver
  *		Start the WAL receiver process, if not running and our state allows.
@@ -6170,6 +6420,10 @@ save_backend_variables(BackendParameters *param, Port *port,
 
 	strlcpy(param->pkglib_path, pkglib_path, MAXPGPATH);
 
+	if (!write_inheritable_socket(&param->proxySocket, MyProxySocket, childPid))
+		return false;
+	param->proxyId = MyProxyId;
+
 	return true;
 }
 
@@ -6400,6 +6654,9 @@ restore_backend_variables(BackendParameters *param, Port *port)
 
 	strlcpy(pkglib_path, param->pkglib_path, MAXPGPATH);
 
+	read_inheritable_socket(&MyProxySocket, &param->proxySocket);
+	MyProxyId = param->proxyId;
+
 	/*
 	 * We need to restore fd.c's counts of externally-opened FDs; to avoid
 	 * confusion, be sure to do this after restoring max_safe_fds.  (Note:
diff --git a/src/backend/postmaster/proxy.c b/src/backend/postmaster/proxy.c
new file mode 100644
index 0000000000..9df2fc4a0b
--- /dev/null
+++ b/src/backend/postmaster/proxy.c
@@ -0,0 +1,1514 @@
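+/*-------------------------------------------------------------------------
+ *
+ * proxy.c
+ *	  Connection proxy worker for the built-in connection pooler:
+ *	  multiplexes pooled client sessions over a set of backend connections,
+ *	  relaying data between client and backend sockets.
+ *
+ * src/backend/postmaster/proxy.c
+ *
+ *-------------------------------------------------------------------------
+ */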
+#include <unistd.h>
+#include <errno.h>
+
+#include "postgres.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
+#include "postmaster/fork_process.h"
+#include "access/htup_details.h"
+#include "replication/walsender.h"
+#include "storage/ipc.h"
+#include "storage/latch.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+#include "utils/builtins.h"
+#include "utils/memutils.h"
+#include "utils/timestamp.h"
+#include "libpq/libpq.h"
+#include "libpq/libpq-be.h"
+#include "libpq/pqsignal.h"
+#include "libpq/pqformat.h"
+#include "tcop/tcopprot.h"
+#include "utils/timeout.h"
+#include "utils/ps_status.h"
+#include "../interfaces/libpq/libpq-fe.h"
+#include "../interfaces/libpq/libpq-int.h"
+
+#define INIT_BUF_SIZE	   (64*1024)
+#define MAX_READY_EVENTS   128
+#define DB_HASH_SIZE	   101
+#define PROXY_WAIT_TIMEOUT 1000 /* 1 second */
+
+struct SessionPool;
+struct Proxy;
+
+typedef struct
+{
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+}
+SessionPoolKey;
+
+#define NULLSTR(s) ((s) ? (s) : "?")
+
+/*
+ * Channels represent both clients and backends
+ */
+typedef struct Channel
+{
+	int      magic;
+	char*	 buf;
+	int		 rx_pos;
+	int		 tx_pos;
+	int		 tx_size;
+	int		 buf_size;
+	int		 event_pos;			 /* Position of wait event returned by AddWaitEventToSet */
+
+	Port*	 client_port;		 /* Not null for client, null for server */
+
+	pgsocket backend_socket;
+	PGPROC*	 backend_proc;
+	int		 backend_pid;
+	bool	 backend_is_tainted; /* client changes session context */
+	bool	 backend_is_ready;	 /* ready for query */
+	bool	 is_interrupted;	 /* client interrupts query execution */
+	bool	 is_disconnected;	 /* connection is lost */
+	bool     is_idle;            /* no activity on this channel */
+	bool     in_transaction;     /* inside transaction body */
+	bool	 edge_triggered;	 /* emulate epoll EPOLLET (edge-triggered) flag */
+	/* We need to save startup packet response to be able to send it to new connection */
+	int		 handshake_response_size;
+	char*	 handshake_response;
+	TimestampTz backend_last_activity;   /* time of last backend activity */
+	char*    gucs;               /* concatenated "SET var=" commands for this session */
+	char*    prev_gucs;          /* previous value of "gucs" to perform rollback in case of error */
+	struct Channel* peer;
+	struct Channel* next;
+	struct Proxy*	proxy;
+	struct SessionPool* pool;
+}
+Channel;
+
+#define ACTIVE_CHANNEL_MAGIC    0xDEFA1234U
+#define REMOVED_CHANNEL_MAGIC   0xDEADDEEDU
+
+/*
+ * Control structure for connection proxies (several proxy workers can be launched and each has its own proxy instance).
+ * A proxy contains a hash of session pools, one for each role/dbname combination.
+ */
+typedef struct Proxy
+{
+	MemoryContext parse_ctx;	 /* Temporary memory context used for parsing startup packet */
+	WaitEventSet* wait_events;	 /* Set of backend and client socket descriptors */
+	HTAB*	 pools;				 /* Session pool map with dbname/role used as a key */
+	int		 n_accepted_connections; /* Number of accepted, but not yet established connections
+									  * (startup packet is not received and db/role are not known) */
+	int		 max_backends;		 /* Maximal number of backends per database */
+	bool	 shutdown;			 /* Shutdown flag */
+	Channel* hangout;			 /* List of disconnected channels pending removal */
+	ConnectionProxyState* state; /* State of proxy */
+	TimestampTz last_idle_timeout_check;  /* Time of last check for idle worker timeout expiration */
+} Proxy;
+
+/*
+ * Connection pool to particular role/dbname
+ */
+typedef struct SessionPool
+{
+	SessionPoolKey key;
+	Channel* idle_backends;		  /* List of idle backends */
+	Channel* pending_clients;	  /* List of clients waiting for free backend */
+	Proxy*	 proxy;				  /* Owner of this pool */
+	int		 n_launched_backends; /* Total number of launched backends */
+	int		 n_dedicated_backends;/* Number of dedicated (tainted) backends */
+	int		 n_idle_backends;	  /* Number of backends in idle state */
+	int		 n_connected_clients; /* Total number of connected clients */
+	int		 n_idle_clients;	  /* Number of clients in idle state */
+	int		 n_pending_clients;	  /* Number of clients waiting for free backend */
+	List*    startup_gucs;        /* List of startup options specified in startup packet */
+	char*    cmdline_options;     /* Command line options passed to backend */
+}
+SessionPool;
+
+static void channel_remove(Channel* chan);
+static Channel* backend_start(SessionPool* pool, char** error);
+static bool channel_read(Channel* chan);
+static bool channel_write(Channel* chan, bool synchronous);
+static void channel_hangout(Channel* chan, char const* op);
+static ssize_t socket_write(Channel* chan, char const* buf, size_t size);
+
+/*
+ * #define ELOG(severity, fmt,...) elog(severity, "PROXY: " fmt, ## __VA_ARGS__)
+ */
+#define ELOG(severity,fmt,...)
+
+static Proxy* proxy;
+int MyProxyId;
+pgsocket MyProxySocket;
+ConnectionProxyState* ProxyState;
+
+/**
+ * The backend is ready for the next command outside a transaction block (idle state).
+ * If the backend is not tainted, it is now possible to schedule some other client onto it.
+ */
+static bool
+backend_reschedule(Channel* chan, bool is_new)
+{
+	chan->backend_is_ready = false;
+	if (chan->backend_proc == NULL) /* Lazy resolving of PGPROC entry */
+	{
+		Assert(chan->backend_pid != 0);
+		chan->backend_proc = BackendPidGetProc(chan->backend_pid);
+		Assert(chan->backend_proc); /* If the backend has completed execution of a query, it has definitely registered itself in the procarray */
+	}
+	if (is_new || (!chan->backend_is_tainted && !chan->backend_proc->is_tainted)) /* If backend is not storing some session context */
+	{
+		Channel* pending = chan->pool->pending_clients;
+		if (chan->peer)
+		{
+			chan->peer->peer = NULL;
+			chan->pool->n_idle_clients += 1;
+			chan->pool->proxy->state->n_idle_clients += 1;
+			chan->peer->is_idle = true;
+		}
+		if (pending)
+		{
+			/* Has pending clients: serve one of them */
+			ELOG(LOG, "Backend %d is reassigned to client %p", chan->backend_pid, pending);
+			chan->pool->pending_clients = pending->next;
+			Assert(chan != pending);
+			chan->peer = pending;
+			pending->peer = chan;
+			chan->pool->n_pending_clients -= 1;
+			if (pending->tx_size == 0) /* new client has sent startup packet and we now need to send handshake response */
+			{
+				Assert(chan->handshake_response != NULL); /* backend already sent handshake response */
+				Assert(chan->handshake_response_size < chan->buf_size);
+				memcpy(chan->buf, chan->handshake_response, chan->handshake_response_size);
+				chan->rx_pos = chan->tx_size = chan->handshake_response_size;
+				ELOG(LOG, "Simulate response for startup packet to client %p", pending);
+				chan->backend_is_ready = true;
+				return channel_write(pending, false);
+			}
+			else
+			{
+				ELOG(LOG, "Try to send pending request from client %p to backend %p (pid %d)", pending, chan, chan->backend_pid);
+				Assert(pending->tx_pos == 0 && pending->rx_pos >= pending->tx_size);
+				return channel_write(chan, false); /* Send pending request to backend */
+			}
+		}
+		else /* return backend to the list of idle backends */
+		{
+			ELOG(LOG, "Backend %d is idle", chan->backend_pid);
+			Assert(!chan->client_port);
+			chan->next = chan->pool->idle_backends;
+			chan->pool->idle_backends = chan;
+			chan->pool->n_idle_backends += 1;
+			chan->pool->proxy->state->n_idle_backends += 1;
+			chan->is_idle = true;
+			chan->peer = NULL;
+		}
+	}
+	else if (!chan->backend_is_tainted) /* if it was not marked as tainted before... */
+	{
+		ELOG(LOG, "Backend %d is tainted", chan->backend_pid);
+		chan->backend_is_tainted = true;
+		chan->proxy->state->n_dedicated_backends += 1;
+		chan->pool->n_dedicated_backends += 1;
+	}
+	return true;
+}
+
+static size_t
+string_length(char const* str)
+{
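+	/* Count each space twice, because string_append() escapes spaces with a backslash. */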
+	size_t spaces = 0;
+	char const* p = str;
+	if (p == NULL)
+		return 0;
+	while (*p != '\0')
+		spaces += (*p++ == ' ');
+	return (p - str) + spaces;
+}
+
+static size_t
+string_list_length(List* list)
+{
+	ListCell *cell;
+	size_t length = 0;
+	foreach (cell, list)
+	{
+		length += strlen((char*)lfirst(cell));
+	}
+	return length;
+}
+
+static List*
+string_list_copy(List* orig)
+{
+	List* copy = list_copy(orig);
+	ListCell *cell;
+	foreach (cell, copy)
+	{
+		lfirst(cell) = pstrdup((char*)lfirst(cell));
+	}
+	return copy;
+}
+
+static bool
+string_list_equal(List* a, List* b)
+{
+	const ListCell *ca, *cb;
+	if (list_length(a) != list_length(b))
+		return false;
+	forboth(ca, a, cb, b)
+		if (strcmp(lfirst(ca), lfirst(cb)) != 0)
+			return false;
+	return true;
+}
+
+static char*
+string_append(char* dst, char const* src)
+{
+	while (*src)
+	{
+		if (*src == ' ')
+			*dst++ = '\\';
+		*dst++ = *src++;
+	}
+	return dst;
+}
+
+static bool
+string_equal(char const* a, char const* b)
+{
+	return a == b ? true : a == NULL || b == NULL ? false : strcmp(a, b) == 0;
+}
+
+/**
+ * Parse client's startup packet and assign client to proper connection pool based on dbname/role
+ */
+static bool
+client_connect(Channel* chan, int startup_packet_size)
+{
+	bool found;
+	SessionPoolKey key;
+	char* startup_packet = chan->buf;
+	MemoryContext proxy_ctx;
+
+	Assert(chan->client_port);
+
+	/* parse startup packet in parse_ctx memory context and reset it when it is not needed any more */
+	MemoryContextReset(chan->proxy->parse_ctx);
+	proxy_ctx = MemoryContextSwitchTo(chan->proxy->parse_ctx);
+
+	/* Associate libpq with client's port */
+	MyProcPort = chan->client_port;
+	pq_init();
+
+	if (ParseStartupPacket(chan->client_port, chan->proxy->parse_ctx, startup_packet+4, startup_packet_size-4, false, false) != STATUS_OK) /* skip packet size */
+	{
+		MyProcPort = NULL;
+		MemoryContextSwitchTo(proxy_ctx);
+		elog(WARNING, "Failed to parse startup packet for client %p", chan);
+		return false;
+	}
+	MyProcPort = NULL;
+	MemoryContextSwitchTo(proxy_ctx);
+	if (am_walsender)
+	{
+		elog(WARNING, "WAL sender should not be connected through proxy");
+		return false;
+	}
+
+	chan->proxy->state->n_ssl_clients += chan->client_port->ssl_in_use;
+	pg_set_noblock(chan->client_port->sock); /* SSL handshake may switch socket to blocking mode */
+	memset(&key, 0, sizeof(key));
+	strlcpy(key.database, chan->client_port->database_name, NAMEDATALEN);
+	if (MultitenantProxy)
+		chan->gucs = psprintf("set local role %s;", chan->client_port->user_name);
+	else
+		strlcpy(key.username, chan->client_port->user_name, NAMEDATALEN);
+
+	ELOG(LOG, "Client %p connects to %s/%s", chan, key.database, key.username);
+
+	chan->pool = (SessionPool*)hash_search(chan->proxy->pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* First connection to this role/dbname */
+		chan->proxy->state->n_pools += 1;
+		chan->pool->startup_gucs = NULL;
+		chan->pool->cmdline_options = NULL;
+		memset((char*)chan->pool + sizeof(SessionPoolKey), 0, sizeof(SessionPool) - sizeof(SessionPoolKey));
+	}
+	if (ProxyingGUCs)
+	{
+		ListCell *gucopts = list_head(chan->client_port->guc_options);
+		while (gucopts)
+		{
+			char	   *name;
+			char	   *value;
+
+			name = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			value = lfirst(gucopts);
+			gucopts = lnext(chan->client_port->guc_options, gucopts);
+
+			chan->gucs = psprintf("%sset local %s='%s';", chan->gucs ? chan->gucs : "", name, value);
+		}
+	}
+	else
+	{
+		/* Assume that all clients are using the same set of GUCs.
+		 * Use them for launching pooler worker backends and report an error
+		 * if GUCs in startup packets are different.
+		 */
+		if (chan->pool->n_launched_backends == chan->pool->n_dedicated_backends)
+		{
+			list_free(chan->pool->startup_gucs);
+			if (chan->pool->cmdline_options)
+				pfree(chan->pool->cmdline_options);
+
+			chan->pool->startup_gucs = string_list_copy(chan->client_port->guc_options);
+			if (chan->client_port->cmdline_options)
+				chan->pool->cmdline_options = pstrdup(chan->client_port->cmdline_options);
+		}
+		else
+		{
+			if (!string_list_equal(chan->pool->startup_gucs, chan->client_port->guc_options) ||
+				!string_equal(chan->pool->cmdline_options, chan->client_port->cmdline_options))
+			{
+				elog(LOG, "Ignoring startup GUCs of client %s",
+					 NULLSTR(chan->client_port->application_name));
+			}
+		}
+	}
+	chan->pool->proxy = chan->proxy;
+	chan->pool->n_connected_clients += 1;
+	chan->proxy->n_accepted_connections -= 1;
+	chan->pool->n_idle_clients += 1;
+	chan->pool->proxy->state->n_idle_clients += 1;
+	chan->is_idle = true;
+	return true;
+}
+
+/*
+ * Send an error message to the client. This function is called when a new backend cannot be started
+ * or a client cannot be assigned to a backend because of configuration limitations.
+ */
+static void
+report_error_to_client(Channel* chan, char const* error)
+{
+	StringInfoData msgbuf;
+	initStringInfo(&msgbuf);
+	pq_sendbyte(&msgbuf, 'E');
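+	/* length = 4 (length field) + 1 (field code) + strlen(error) + 1 (string terminator) + 1 (message terminator) */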
+	pq_sendint32(&msgbuf, 7 + strlen(error));
+	pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);
+	pq_sendstring(&msgbuf, error);
+	pq_sendbyte(&msgbuf, '\0');
+	socket_write(chan, msgbuf.data, msgbuf.len);
+	pfree(msgbuf.data);
+}
+
+/*
+ * Attach client to backend. Return true if new backend is attached, false otherwise.
+ */
+static bool
+client_attach(Channel* chan)
+{
+	Channel* idle_backend = chan->pool->idle_backends;
+	chan->is_idle = false;
+	chan->pool->n_idle_clients -= 1;
+	chan->pool->proxy->state->n_idle_clients -= 1;
+	if (idle_backend)
+	{
+		/* has some idle backend */
+		Assert(!idle_backend->backend_is_tainted && !idle_backend->client_port);
+		Assert(chan != idle_backend);
+		chan->peer = idle_backend;
+		idle_backend->peer = chan;
+		chan->pool->idle_backends = idle_backend->next;
+		chan->pool->n_idle_backends -= 1;
+		chan->pool->proxy->state->n_idle_backends -= 1;
+		idle_backend->is_idle = false;
+		if (IdlePoolWorkerTimeout)
+			chan->backend_last_activity = GetCurrentTimestamp();
+		ELOG(LOG, "Attach client %p to backend %p (pid %d)", chan, idle_backend, idle_backend->backend_pid);
+	}
+	else /* all backends are busy */
+	{
+		if (chan->pool->n_launched_backends < chan->proxy->max_backends)
+		{
+			char* error;
+			/* Try to start new backend */
+			idle_backend = backend_start(chan->pool, &error);
+			if (idle_backend != NULL)
+			{
+				ELOG(LOG, "Start new backend %p (pid %d) for client %p",
+					 idle_backend, idle_backend->backend_pid, chan);
+				Assert(chan != idle_backend);
+				chan->peer = idle_backend;
+				idle_backend->peer = chan;
+				if (IdlePoolWorkerTimeout)
+					idle_backend->backend_last_activity = GetCurrentTimestamp();
+				return true;
+			}
+			else
+			{
+				if (error)
+				{
+					report_error_to_client(chan, error);
+					free(error);
+				}
+				channel_hangout(chan, "connect");
+				return false;
+			}
+		}
+		/* Postpone handshake until some backend is available */
+		ELOG(LOG, "Client %p is waiting for available backends", chan);
+		chan->next = chan->pool->pending_clients;
+		chan->pool->pending_clients = chan;
+		chan->pool->n_pending_clients += 1;
+	}
+	return false;
+}
+
+/*
+ * Handle communication failure for this channel.
+ * It is not possible to remove the channel immediately because it may still be referenced by other pending epoll events.
+ * So link all such channels into a singly-linked list for deferred deletion.
+ */
+static void
+channel_hangout(Channel* chan, char const* op)
+{
+	Channel** ipp;
+	Channel* peer = chan->peer;
+	if (chan->is_disconnected || chan->pool == NULL)
+	   return;
+
+	if (chan->client_port) {
+		ELOG(LOG, "Hangout client %p due to %s error: %m", chan, op);
+		for (ipp = &chan->pool->pending_clients; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				*ipp = chan->next;
+				chan->pool->n_pending_clients -= 1;
+				break;
+			}
+		}
+		if (chan->is_idle)
+		{
+			chan->pool->n_idle_clients -= 1;
+			chan->pool->proxy->state->n_idle_clients -= 1;
+			chan->is_idle = false;
+		}
+	}
+	else
+	{
+		ELOG(LOG, "Hangout backend %p (pid %d) due to %s error: %m", chan, chan->backend_pid, op);
+		for (ipp = &chan->pool->idle_backends; *ipp != NULL; ipp = &(*ipp)->next)
+		{
+			if (*ipp == chan)
+			{
+				Assert (chan->is_idle);
+				*ipp = chan->next;
+				chan->pool->n_idle_backends -= 1;
+				chan->pool->proxy->state->n_idle_backends -= 1;
+				chan->is_idle = false;
+				break;
+			}
+		}
+	}
+	if (peer)
+	{
+		peer->peer = NULL;
+		chan->peer = NULL;
+	}
+	chan->backend_is_ready = false;
+
+	if (chan->client_port && peer) /* If it is client connected to backend. */
+	{
+		if (!chan->is_interrupted) /* Client didn't send an 'X' command, so do it on its behalf. */
+		{
+			ELOG(LOG, "Send terminate command to backend %p (pid %d)", peer, peer->backend_pid);
+			peer->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+			channel_write(peer, false);
+			return;
+		}
+		else if (!peer->is_interrupted)
+		{
+			/* Client already sent 'X' command, so we can safely reschedule backend to some other client session */
+			backend_reschedule(peer, false);
+		}
+	}
+	chan->next = chan->proxy->hangout;
+	chan->proxy->hangout = chan;
+	chan->is_disconnected = true;
+}
+
+/*
+ * Try to write data to the socket.
+ */
+static ssize_t
+socket_write(Channel* chan, char const* buf, size_t size)
+{
+	ssize_t rc;
+#ifdef USE_SSL
+	int waitfor = 0;
+	if (chan->client_port && chan->client_port->ssl_in_use)
+		rc = be_tls_write(chan->client_port, (char*)buf, size, &waitfor);
+	else
+#endif
+		rc = chan->client_port
+			? secure_raw_write(chan->client_port, buf, size)
+			: send(chan->backend_socket, buf, size, 0);
+	if (rc == 0 || (rc < 0 && (errno != EAGAIN && errno != EWOULDBLOCK)))
+	{
+		channel_hangout(chan, "write");
+	}
+	return rc;
+}
+
+
+/*
+ * Try to send some data to the channel.
+ * The data is located in the peer's buffer. Because we use edge-triggered mode, we have to use non-blocking IO
+ * and try to write all available data. Once the write is completed we should try to read more data from the source socket.
+ * The "synchronous" flag is used to avoid infinite recursion between reads and writes.
+ * Returns true if there is nothing to do or the operation completed successfully, false in case of an error
+ * or if the socket buffer is full.
+ */
+static bool
+channel_write(Channel* chan, bool synchronous)
+{
+	Channel* peer = chan->peer;
+	if (!chan->client_port && chan->is_interrupted)
+	{
+		/* Send terminate command to the backend. */
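+		/* Frontend Terminate message: type byte 'X' followed by an int32 length of 4 (the length counts itself). */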
+		char const terminate[] = {'X', 0, 0, 0, 4};
+		if (socket_write(chan, terminate, sizeof(terminate)) <= 0)
+			return false;
+		channel_hangout(chan, "terminate");
+		return true;
+	}
+	if (peer == NULL)
+		return false;
+
+	while (peer->tx_pos < peer->tx_size) /* has something to write */
+	{
+		ssize_t rc = socket_write(chan, peer->buf + peer->tx_pos, peer->tx_size - peer->tx_pos);
+
+		ELOG(LOG, "%p: write %d tx_pos=%d, tx_size=%d: %m", chan, (int)rc, peer->tx_pos, peer->tx_size);
+		if (rc <= 0)
+			return false;
+
+		if (!chan->client_port)
+			ELOG(LOG, "Send command %c from client %d to backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], peer->client_port->sock, chan->backend_pid, chan, chan->backend_is_ready);
+		else
+			ELOG(LOG, "Send reply %c to client %d from backend %d (%p:ready=%d)", peer->buf[peer->tx_pos], chan->client_port->sock, peer->backend_pid, peer, peer->backend_is_ready);
+
+		if (chan->client_port)
+			chan->proxy->state->tx_bytes += rc;
+		else
+			chan->proxy->state->rx_bytes += rc;
+		if (rc > 0 && chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE|WL_SOCKET_READABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+		peer->tx_pos += rc;
+	}
+	if (peer->tx_size != 0)
+	{
+		/* Copy rest of received data to the beginning of the buffer */
+		chan->backend_is_ready = false;
+		Assert(peer->rx_pos >= peer->tx_size);
+		memmove(peer->buf, peer->buf + peer->tx_size, peer->rx_pos - peer->tx_size);
+		peer->rx_pos -= peer->tx_size;
+		peer->tx_pos = peer->tx_size = 0;
+		if (peer->backend_is_ready) {
+			Assert(peer->rx_pos == 0);
+			backend_reschedule(peer, false);
+			return true;
+		}
+	}
+	return synchronous || channel_read(peer); /* write is not invoked from read */
+}
+
+static bool
+is_transaction_start(char* stmt)
+{
+	return pg_strncasecmp(stmt, "begin", 5) == 0 || pg_strncasecmp(stmt, "start", 5) == 0;
+}
+
+static bool
+is_transactional_statement(char* stmt)
+{
+	static char const* const non_tx_stmts[] = {
+		"create tablespace",
+		"create database",
+		"cluster",
+		"drop",
+		"discard",
+		"reindex",
+		"rollback",
+		"vacuum",
+		NULL
+	};
+	int i;
+	for (i = 0; non_tx_stmts[i]; i++)
+	{
+		if (pg_strncasecmp(stmt, non_tx_stmts[i], strlen(non_tx_stmts[i])) == 0)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Try to read more data from the channel and send it to the peer.
+ */
+static bool
+channel_read(Channel* chan)
+{
+	int	 msg_start;
+	while (chan->tx_size == 0) /* there is no pending write op */
+	{
+		ssize_t rc;
+		bool handshake = false;
+#ifdef USE_SSL
+		int waitfor = 0;
+		if (chan->client_port && chan->client_port->ssl_in_use)
+			rc = be_tls_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, &waitfor);
+		else
+#endif
+			rc = chan->client_port
+				? secure_raw_read(chan->client_port, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos)
+				: recv(chan->backend_socket, chan->buf + chan->rx_pos, chan->buf_size - chan->rx_pos, 0);
+		ELOG(LOG, "%p: read %d: %m", chan, (int)rc);
+
+		if (rc <= 0)
+		{
+			if (rc == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
+				channel_hangout(chan, "read");
+			return false; /* wait for more data */
+		}
+		else if (chan->edge_triggered)
+		{
+			/* resume accepting all events */
+			ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE, NULL);
+			chan->edge_triggered = false;
+		}
+
+		if (!chan->client_port)
+			ELOG(LOG, "Receive reply %c %d bytes from backend %d (%p:ready=%d) to client %d", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->backend_pid, chan, chan->backend_is_ready, chan->peer ? chan->peer->client_port->sock : -1);
+		else
+			ELOG(LOG, "Receive command %c %d bytes from client %d to backend %d (%p:ready=%d)", chan->buf[0] ? chan->buf[0] : '?', (int)rc + chan->rx_pos, chan->client_port->sock, chan->peer ? chan->peer->backend_pid : -1, chan->peer, chan->peer ? chan->peer->backend_is_ready : -1);
+
+		chan->rx_pos += rc;
+		msg_start = 0;
+
+		/* Loop through all received messages */
+		while (chan->rx_pos - msg_start >= 5) /* has message code + length */
+		{
+			int msg_len;
+			uint32 new_msg_len;
+			if (chan->pool == NULL) /* process startup packet */
+			{
+				Assert(msg_start == 0);
+				memcpy(&msg_len, chan->buf + msg_start, sizeof(msg_len));
+				msg_len = ntohl(msg_len);
+				handshake = true;
+			}
+			else
+			{
+				ELOG(LOG, "%p receive message %c", chan, chan->buf[msg_start]);
+				memcpy(&msg_len, chan->buf + msg_start + 1, sizeof(msg_len));
+				msg_len = ntohl(msg_len) + 1;
+			}
+			if (msg_start + msg_len > chan->buf_size)
+			{
+				/* Reallocate buffer to fit complete message body */
+				chan->buf_size = msg_start + msg_len;
+				chan->buf = repalloc(chan->buf, chan->buf_size);
+			}
+			if (chan->rx_pos - msg_start >= msg_len) /* Message is completely fetched */
+			{
+				if (chan->pool == NULL) /* receive startup packet */
+				{
+					Assert(chan->client_port);
+					if (!client_connect(chan, msg_len))
+					{
+						/* Some trouble with processing startup packet */
+						chan->is_disconnected = true;
+						channel_remove(chan);
+						return false;
+					}
+				}
+				else if (!chan->client_port) /* Message from backend */
+				{
+					if (chan->buf[msg_start] == 'Z'	/* Ready for query */
+						&& chan->buf[msg_start+5] == 'I') /* Transaction block status is idle */
+					{
+						Assert(chan->rx_pos - msg_start == msg_len); /* Should be last message */
+						chan->backend_is_ready = true; /* Backend is ready for query */
+						chan->proxy->state->n_transactions += 1;
+						if (chan->peer)
+							chan->peer->in_transaction = false;
+					}
+					else if (chan->buf[msg_start] == 'E')	/* Error */
+					{
+						if (chan->peer && chan->peer->prev_gucs)
+						{
+							/* Undo GUC assignment */
+							pfree(chan->peer->gucs);
+							chan->peer->gucs = chan->peer->prev_gucs;
+							chan->peer->prev_gucs = NULL;
+						}
+					}
+				}
+				else if (chan->client_port) /* Message from client */
+				{
+					if (chan->buf[msg_start] == 'X')	/* Terminate message */
+					{
+						Channel* backend = chan->peer;
+						elog(DEBUG1, "Receive 'X' to backend %d", backend != NULL ? backend->backend_pid : 0);
+						chan->is_interrupted = true;
+						if (backend != NULL && !backend->backend_is_ready && !backend->backend_is_tainted)
+						{
+							/* If the client sends an abort inside a transaction, then mark the backend as tainted */
+							backend->backend_is_tainted = true;
+							chan->proxy->state->n_dedicated_backends += 1;
+							chan->pool->n_dedicated_backends += 1;
+						}
+						if (backend == NULL || !backend->backend_is_tainted)
+						{
+							/* Skip terminate message to idle and non-tainted backends */
+							channel_hangout(chan, "terminate");
+							return false;
+						}
+					}
+					else if ((ProxyingGUCs || MultitenantProxy)
+							 && chan->buf[msg_start] == 'Q' && !chan->in_transaction)
+					{
+						char* stmt = &chan->buf[msg_start+5];
+						if (chan->prev_gucs)
+						{
+							pfree(chan->prev_gucs);
+							chan->prev_gucs = NULL;
+						}
+						if (ProxyingGUCs
+							&& ((pg_strncasecmp(stmt, "set", 3) == 0
+								 && pg_strncasecmp(stmt+3, " local", 6) != 0)
+								|| pg_strncasecmp(stmt, "reset", 5) == 0))
+						{
+							char* new_msg;
+							chan->prev_gucs = chan->gucs ? chan->gucs : pstrdup("");
+							if (pg_strncasecmp(stmt, "reset", 5) == 0)
+							{
+								char* semi = strchr(stmt+5, ';');
+								if (semi)
+									*semi = '\0';
+								chan->gucs = psprintf("%sset local%s=default;",
+													  chan->prev_gucs, stmt+5);
+							}
+							else
+							{
+								char* param = stmt + 3;
+								if (pg_strncasecmp(param, " session", 8) == 0)
+									param += 8;
+								chan->gucs = psprintf("%sset local%s%c", chan->prev_gucs, param,
+													  chan->buf[chan->rx_pos-2] == ';' ? ' ' : ';');
+							}
+							new_msg = chan->gucs + strlen(chan->prev_gucs);
+							Assert(msg_start + strlen(new_msg)*2 + 6 < chan->buf_size);
+							/*
+							 * We need to send the SET command to check that it is correct.
+							 * To avoid the "SET LOCAL can only be used in transaction blocks"
+							 * error we need to construct a transaction block. Let's just double the command.
+							 */
+							msg_len = sprintf(stmt, "%s%s", new_msg, new_msg) + 6;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+							chan->rx_pos = msg_start + msg_len;
+						}
+						else if (chan->gucs && is_transactional_statement(stmt))
+						{
+							size_t gucs_len = strlen(chan->gucs);
+							if (chan->rx_pos + gucs_len + 1 > chan->buf_size)
+							{
+								/* Reallocate buffer to fit concatenated GUCs */
+								chan->buf_size = chan->rx_pos + gucs_len + 1;
+								chan->buf = repalloc(chan->buf, chan->buf_size);
+							}
+							if (is_transaction_start(stmt))
+							{
+								/* Append GUCs after BEGIN command to include them in transaction body */
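+								/* e.g. "begin" from the client becomes "begin;set local work_mem='64MB';" (GUC value is illustrative) */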
+								Assert(chan->buf[chan->rx_pos-1] == '\0');
+								if (chan->buf[chan->rx_pos-2] != ';')
+								{
+									chan->buf[chan->rx_pos-1] = ';';
+									chan->rx_pos += 1;
+									msg_len += 1;
+								}
+								memcpy(&chan->buf[chan->rx_pos-1], chan->gucs, gucs_len+1);
+								chan->in_transaction = true;
+							}
+							else
+							{
+								/* Prepend standalone command with GUCs */
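+								/* e.g. a client statement becomes "set local work_mem='64MB';<original statement>" (GUC value is illustrative) */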
+								memmove(stmt + gucs_len, stmt, msg_len);
+								memcpy(stmt, chan->gucs, gucs_len);
+							}
+							chan->rx_pos += gucs_len;
+							msg_len += gucs_len;
+							new_msg_len = pg_hton32(msg_len - 1);
+							memcpy(&chan->buf[msg_start+1], &new_msg_len, sizeof(new_msg_len));
+						}
+						else if (is_transaction_start(stmt))
+							chan->in_transaction = true;
+					}
+				}
+				msg_start += msg_len;
+			}
+			else break; /* Incomplete message. */
+		}
+		elog(DEBUG1, "Message size %d", msg_start);
+		if (msg_start != 0)
+		{
+			/* Has some complete messages to send to peer */
+			if (chan->peer == NULL)	 /* client is not yet connected to backend */
+			{
+				if (!chan->client_port)
+				{
+					/* We are not expecting messages from an idle backend. Assume that it is some error or shutdown. */
+					channel_hangout(chan, "idle");
+					return false;
+				}
+				client_attach(chan);
+				if (handshake) /* Send handshake response to the client */
+				{
+					/* If we attach a new client to an existing backend, then we need to send the handshake response to the client */
+					Channel* backend = chan->peer;
+					chan->rx_pos = 0; /* Skip startup packet */
+					if (backend != NULL) /* Backend was assigned */
+					{
+						Assert(backend->handshake_response != NULL); /* backend has already sent handshake responses */
+						Assert(backend->handshake_response_size < backend->buf_size);
+						memcpy(backend->buf, backend->handshake_response, backend->handshake_response_size);
+						backend->rx_pos = backend->tx_size = backend->handshake_response_size;
+						backend->backend_is_ready = true;
+						elog(DEBUG1, "Send handshake response to the client");
+						return channel_write(chan, false);
+					}
+					else
+					{
+						/* Handshake response will be sent to the client later when a backend is assigned */
+						elog(DEBUG1, "Handshake response will be sent to the client later when backend is assigned");
+						return false;
+					}
+				}
+				else if (chan->peer == NULL) /* Backend was not assigned */
+				{
+					chan->tx_size = msg_start; /* query will be sent later once a backend is assigned */
+					elog(DEBUG1, "Query from this client will be sent later when backend is assigned");
+					return false;
+				}
+			}
+			Assert(chan->tx_pos == 0);
+			Assert(chan->rx_pos >= msg_start);
+			chan->tx_size = msg_start;
+			if (!channel_write(chan->peer, true))
+				return false;
+		}
+		/* If backend is out of transaction, then reschedule it */
+		if (chan->backend_is_ready)
+			return backend_reschedule(chan, false);
+
+		/* Do not try to read more data if edge-triggered mode is not supported */
+		if (!WaitEventUseEpoll)
+			break;
+	}
+	return true;
+}
+
+/*
+ * Create new channel.
+ */
+static Channel*
+channel_create(Proxy* proxy)
+{
+	Channel* chan = (Channel*)palloc0(sizeof(Channel));
+	chan->magic = ACTIVE_CHANNEL_MAGIC;
+	chan->proxy = proxy;
+	chan->buf = palloc(INIT_BUF_SIZE);
+	chan->buf_size = INIT_BUF_SIZE;
+	chan->tx_pos = chan->rx_pos = chan->tx_size = 0;
+	return chan;
+}
+
+/*
+ * Register new channel in wait event set.
+ */
+static bool
+channel_register(Proxy* proxy, Channel* chan)
+{
+	pgsocket sock = chan->client_port ? chan->client_port->sock : chan->backend_socket;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(sock);
+	chan->event_pos =
+		AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE|WL_SOCKET_WRITEABLE|WL_SOCKET_EDGE,
+						  sock, NULL, chan);
+	if (chan->event_pos < 0)
+	{
+		elog(WARNING, "PROXY: Failed to add new client - too many sessions: %d clients, %d backends. "
+					 "Try to increase 'max_sessions' configuration parameter.",
+					 proxy->state->n_clients, proxy->state->n_backends);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Start new backend for particular pool associated with dbname/role combination.
+ * Backend is forked using BackendStartup function.
+ */
+static Channel*
+backend_start(SessionPool* pool, char** error)
+{
+	Channel* chan;
+	char postmaster_port[8];
+	char* options = (char*)palloc(string_length(pool->cmdline_options) + string_list_length(pool->startup_gucs) + list_length(pool->startup_gucs)/2*5 + 1);
+	char const* keywords[] = {"port","dbname","user","sslmode","application_name","options",NULL};
+	char const* values[] = {postmaster_port,pool->key.database,pool->key.username,"disable","pool_worker",options,NULL};
+	PGconn* conn;
+	char* msg;
+	int int32_buf;
+	int msg_len;
+	static bool libpqconn_loaded;
+	ListCell *gucopts;
+	char* dst = options;
+
+	if (!libpqconn_loaded)
+	{
+		/* We need libpq library to be able to establish connections to pool workers.
+		* This library can not be linked statically, so load it on demand. */
+		load_file("libpqconn", false);
+		libpqconn_loaded = true;
+	}
+	pg_ltoa(PostPortNumber, postmaster_port);
+
+	gucopts = list_head(pool->startup_gucs);
+	if (pool->cmdline_options)
+		dst += sprintf(dst, "%s", pool->cmdline_options);
+	while (gucopts)
+	{
+		char	   *name;
+		char	   *value;
+
+		name = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		value = lfirst(gucopts);
+		gucopts = lnext(pool->startup_gucs, gucopts);
+
+		if (strcmp(name, "application_name") != 0)
+		{
+			dst += sprintf(dst, " -c %s=", name);
+			dst = string_append(dst, value);
+		}
+	}
+	*dst = '\0';
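+	/* At this point "options" contains pool->cmdline_options followed by " -c name=value" pairs for the startup GUCs (form shown is illustrative) */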
+	conn = LibpqConnectdbParams(keywords, values, error);
+	pfree(options);
+	if (!conn)
+		return NULL;
+
+	chan = channel_create(pool->proxy);
+	chan->pool = pool;
+	chan->backend_socket = conn->sock;
+	/* Using edge epoll mode requires non-blocking sockets */
+	pg_set_noblock(conn->sock);
+
+	/* Save handshake response */
+	chan->handshake_response_size = conn->inEnd;
+	chan->handshake_response = palloc(chan->handshake_response_size);
+	memcpy(chan->handshake_response, conn->inBuffer, chan->handshake_response_size);
+
+	/* Extract backend pid */
+	msg = chan->handshake_response;
+	while (*msg != 'K') /* Scan handshake response until we reach PID message */
+	{
+		memcpy(&int32_buf, ++msg, sizeof(int32_buf));
+		msg_len = ntohl(int32_buf);
+		msg += msg_len;
+		Assert(msg < chan->handshake_response + chan->handshake_response_size);
+	}
+	memcpy(&int32_buf, msg+5, sizeof(int32_buf));
+	chan->backend_pid = ntohl(int32_buf);
+
+	if (channel_register(pool->proxy, chan))
+	{
+		pool->proxy->state->n_backends += 1;
+		pool->n_launched_backends += 1;
+	}
+	else
+	{
+		*error = strdup("Too many sessions: try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(chan->backend_socket);
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(chan->buf);
+		pfree(chan);
+		chan = NULL;
+	}
+	return chan;
+}
+
+/*
+ * Add new client accepted by postmaster. This client will be assigned to a concrete session pool
+ * when its startup packet is received.
+ */
+static void
+proxy_add_client(Proxy* proxy, Port* port)
+{
+	Channel* chan = channel_create(proxy);
+	chan->client_port = port;
+	chan->backend_socket = PGINVALID_SOCKET;
+	if (channel_register(proxy, chan))
+	{
+		ELOG(LOG, "Add new client %p", chan);
+		proxy->n_accepted_connections += 1;
+		proxy->state->n_clients += 1;
+	}
+	else
+	{
+		report_error_to_client(chan, "Too many sessions. Try to increase 'max_sessions' configuration parameter");
+		/* Too many sessions; the error report was already logged */
+		closesocket(port->sock);
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+		pfree(port->gss);
+#endif
+		chan->magic = REMOVED_CHANNEL_MAGIC;
+		pfree(port);
+		pfree(chan->buf);
+		pfree(chan);
+	}
+}
+
+/*
+ * Perform delayed deletion of channel
+ */
+static void
+channel_remove(Channel* chan)
+{
+	Assert(chan->is_disconnected); /* should be marked as disconnected by channel_hangout */
+	DeleteWaitEventFromSet(chan->proxy->wait_events, chan->event_pos);
+	if (chan->client_port)
+	{
+		if (chan->pool)
+			chan->pool->n_connected_clients -= 1;
+		else
+			chan->proxy->n_accepted_connections -= 1;
+		chan->proxy->state->n_clients -= 1;
+		chan->proxy->state->n_ssl_clients -= chan->client_port->ssl_in_use;
+		closesocket(chan->client_port->sock);
+		pfree(chan->client_port);
+		if (chan->gucs)
+			pfree(chan->gucs);
+		if (chan->prev_gucs)
+			pfree(chan->prev_gucs);
+	}
+	else
+	{
+		chan->proxy->state->n_backends -= 1;
+		chan->proxy->state->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_dedicated_backends -= chan->backend_is_tainted;
+		chan->pool->n_launched_backends -= 1;
+		closesocket(chan->backend_socket);
+		pfree(chan->handshake_response);
+
+		if (chan->pool->pending_clients)
+		{
+			char* error;
+			/* Try to start a new backend in place of the terminated one */
+			Channel* new_backend = backend_start(chan->pool, &error);
+			if (new_backend != NULL)
+			{
+				ELOG(LOG, "Spawn new backend %p instead of terminated %p", new_backend, chan);
+				backend_reschedule(new_backend, true);
+			}
+			else
+				free(error);
+		}
+	}
+	chan->magic = REMOVED_CHANNEL_MAGIC;
+	pfree(chan->buf);
+	pfree(chan);
+}
+
+
+
+/*
+ * Create new proxy.
+ */
+static Proxy*
+proxy_create(pgsocket postmaster_socket, ConnectionProxyState* state, int max_backends)
+{
+	HASHCTL ctl;
+	Proxy*	proxy;
+	MemoryContext proxy_memctx = AllocSetContextCreate(TopMemoryContext,
+													   "Proxy",
+													   ALLOCSET_DEFAULT_SIZES);
+	MemoryContextSwitchTo(proxy_memctx);
+	proxy = palloc0(sizeof(Proxy));
+	proxy->parse_ctx = AllocSetContextCreate(proxy_memctx,
+											 "Startup packet parsing context",
+											 ALLOCSET_DEFAULT_SIZES);
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(SessionPoolKey);
+	ctl.entrysize = sizeof(SessionPool);
+	ctl.hcxt = proxy_memctx;
+	proxy->pools = hash_create("Pool by database and user", DB_HASH_SIZE,
+							   &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+	 /* We need events both for clients and backends, so multiply MaxSessions by two */
+	proxy->wait_events = CreateWaitEventSet(TopMemoryContext, MaxSessions*2);
+	AddWaitEventToSet(proxy->wait_events, WL_SOCKET_READABLE,
+					  postmaster_socket, NULL, NULL);
+	proxy->max_backends = max_backends;
+	proxy->state = state;
+	return proxy;
+}
+
+/*
+ * Main proxy loop
+ */
+static void
+proxy_loop(Proxy* proxy)
+{
+	int i, n_ready;
+	WaitEvent ready[MAX_READY_EVENTS];
+	Channel *chan, *next;
+
+	/* Main loop */
+	while (!proxy->shutdown)
+	{
+		/* Use timeout to allow normal proxy shutdown */
+		int wait_timeout = IdlePoolWorkerTimeout ? IdlePoolWorkerTimeout : PROXY_WAIT_TIMEOUT;
+		n_ready = WaitEventSetWait(proxy->wait_events, wait_timeout, ready, MAX_READY_EVENTS, PG_WAIT_CLIENT);
+		for (i = 0; i < n_ready; i++) {
+			chan = (Channel*)ready[i].user_data;
+			if (chan == NULL) /* new connection from postmaster */
+			{
+				Port* port = (Port*)palloc0(sizeof(Port));
+				port->sock = pg_recv_sock(ready[i].fd);
+				if (port->sock == PGINVALID_SOCKET)
+				{
+					elog(WARNING, "Failed to receive session socket: %m");
+					pfree(port);
+				}
+				else
+				{
+#if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+					port->gss = (pg_gssinfo *)palloc0(sizeof(pg_gssinfo));
+					if (!port->gss)
+						ereport(FATAL,
+								(errcode(ERRCODE_OUT_OF_MEMORY),
+								 errmsg("out of memory")));
+#endif
+					proxy_add_client(proxy, port);
+				}
+			}
+			/*
+			 * epoll may return an event for an already closed session if the
+			 * socket is still open. From epoll documentation: Q6
+			 * Will closing a file descriptor cause it to be removed
+			 * from all epoll sets automatically?
+			 *
+			 * A6  Yes, but be aware of the following point.  A file
+			 * descriptor is a reference to an open file description
+			 * (see open(2)).  Whenever a descriptor is duplicated via
+			 * dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new
+			 * file descriptor referring to the same open file
+			 * description is created.  An open file  description
+			 * continues  to exist until  all  file  descriptors
+			 * referring to it have been closed.  A file descriptor is
+			 * removed from an epoll set only after all the file
+			 * descriptors referring to the underlying open file
+			 * description  have been closed  (or  before  if  the
+			 * descriptor is explicitly removed using epoll_ctl(2)
+			 * EPOLL_CTL_DEL).  This means that even after a file
+			 * descriptor that is part of an epoll set has been
+			 * closed, events may be reported  for that  file
+			 * descriptor  if  other  file descriptors referring to
+			 * the same underlying file description remain open.
+			 *
+			 * By checking for a valid magic field we try to ignore
+			 * such events.
+			 */
+			else if (chan->magic == ACTIVE_CHANNEL_MAGIC)
+			{
+				if (ready[i].events & WL_SOCKET_WRITEABLE) {
+					ELOG(LOG, "Channel %p is writable", chan);
+					channel_write(chan, false);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && (chan->peer == NULL || chan->peer->tx_size == 0)) /* nothing to write */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the writable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_READABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+				if (ready[i].events & WL_SOCKET_READABLE) {
+					ELOG(LOG, "Channel %p is readable", chan);
+					channel_read(chan);
+					if (chan->magic == ACTIVE_CHANNEL_MAGIC && chan->tx_size != 0) /* pending write: read is not prohibited */
+					{
+						/* On systems not supporting epoll edge triggering (Win32, FreeBSD, MacOS), we need to disable the readable event to avoid a busy loop */
+						ModifyWaitEvent(chan->proxy->wait_events, chan->event_pos, WL_SOCKET_WRITEABLE | WL_SOCKET_EDGE, NULL);
+						chan->edge_triggered = true;
+					}
+				}
+			}
+		}
+		if (IdlePoolWorkerTimeout)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+			TimestampTz timeout_usec = IdlePoolWorkerTimeout*1000;
+			if (proxy->last_idle_timeout_check + timeout_usec < now)
+			{
+				HASH_SEQ_STATUS seq;
+				struct SessionPool* pool;
+				proxy->last_idle_timeout_check = now;
+				hash_seq_init(&seq, proxy->pools);
+				while ((pool = hash_seq_search(&seq)) != NULL)
+				{
+					for (chan = pool->idle_backends; chan != NULL; chan = chan->next)
+					{
+						if (chan->backend_last_activity + timeout_usec < now)
+						{
+							chan->is_interrupted = true; /* the interrupted flag makes channel_write send an 'X' message */
+							channel_write(chan, false);
+						}
+					}
+				}
+			}
+		}
+
+		/*
+		 * Delayed deallocation of disconnected channels.
+		 * We cannot delete channels immediately because peer events may still reference them.
+		 */
+		for (chan = proxy->hangout; chan != NULL; chan = next)
+		{
+			next = chan->next;
+			channel_remove(chan);
+		}
+		proxy->hangout = NULL;
+	}
+}
+
+/*
+ * Handle normal shutdown of Postgres instance
+ */
+static void
+proxy_handle_sigterm(SIGNAL_ARGS)
+{
+	if (proxy)
+		proxy->shutdown = true;
+}
+
+#ifdef EXEC_BACKEND
+static pid_t
+proxy_forkexec(void)
+{
+	char	   *av[10];
+	int			ac = 0;
+
+	av[ac++] = "postgres";
+	av[ac++] = "--forkproxy";
+	av[ac++] = NULL;			/* filled in by postmaster_forkexec */
+	av[ac] = NULL;
+
+	Assert(ac < lengthof(av));
+
+	return postmaster_forkexec(ac, av);
+}
+#endif
+
+NON_EXEC_STATIC void
+ConnectionProxyMain(int argc, char *argv[])
+{
+	sigjmp_buf	local_sigjmp_buf;
+
+	/* Identify myself via ps */
+	init_ps_display("connection proxy");
+
+	SetProcessingMode(InitProcessing);
+
+	pqsignal(SIGTERM, proxy_handle_sigterm);
+	pqsignal(SIGQUIT, quickdie);
+	InitializeTimeouts();		/* establishes SIGALRM handler */
+
+	/* Early initialization */
+	BaseInit();
+
+	/*
+	 * Create a per-backend PGPROC struct in shared memory, except in the
+	 * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do
+	 * this before we can use LWLocks (and in the EXEC_BACKEND case we already
+	 * had to do some stuff with LWLocks).
+	 */
+#ifndef EXEC_BACKEND
+	InitProcess();
+#endif
+
+	/*
+	 * If an exception is encountered, processing resumes here.
+	 *
+	 * See notes in postgres.c about the design of this coding.
+	 */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Prevents interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log */
+		EmitErrorReport();
+
+		/*
+		 * We can now go away.	Note that because we called InitProcess, a
+		 * callback was registered to do ProcKill, which will clean up
+		 * necessary state.
+		 */
+		proc_exit(0);
+	}
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	PG_SETMASK(&UnBlockSig);
+
+	proxy = proxy_create(MyProxySocket, &ProxyState[MyProxyId], SessionPoolSize);
+	proxy_loop(proxy);
+
+	proc_exit(0);
+}
+
+/*
+ * Function for launching proxy by postmaster.
+ * This "boilerplate" code is taken from other auxiliary workers.
+ * In the future it may be replaced with a background worker.
+ * The main problem with a background worker is how to pass a socket to it and obtain its PID.
+ */
+int
+ConnectionProxyStart()
+{
+	pid_t		worker_pid;
+
+#ifdef EXEC_BACKEND
+	switch ((worker_pid = proxy_forkexec()))
+#else
+	switch ((worker_pid = fork_process()))
+#endif
+	{
+		case -1:
+			ereport(LOG,
+					(errmsg("could not fork proxy worker process: %m")));
+			return 0;
+
+#ifndef EXEC_BACKEND
+		case 0:
+			/* in postmaster child ... */
+			InitPostmasterChild();
+
+			ConnectionProxyMain(0, NULL);
+			break;
+#endif
+		default:
+		  elog(LOG, "Start proxy process %d", (int) worker_pid);
+		  return (int) worker_pid;
+	}
+
+	/* shouldn't get here */
+	return 0;
+}
+
+/*
+ * We need some place in shared memory to provide information about the proxies' state.
+ */
+int ConnectionProxyShmemSize(void)
+{
+	return ConnectionProxiesNumber*sizeof(ConnectionProxyState);
+}
+
+void ConnectionProxyShmemInit(void)
+{
+	bool found;
+	ProxyState = (ConnectionProxyState*)ShmemInitStruct("connection proxy contexts",
+														ConnectionProxyShmemSize(), &found);
+	if (!found)
+		memset(ProxyState, 0, ConnectionProxyShmemSize());
+}
+
+PG_FUNCTION_INFO_V1(pg_pooler_state);
+
+typedef struct
+{
+	int proxy_id;
+	TupleDesc ret_desc;
+} PoolerStateContext;
+
+/**
+ * Return information about the proxies' state.
+ * This set-returning function returns the following columns:
+ *
+ * pid			  - proxy process identifier
+ * n_clients	  - number of clients connected to proxy
+ * n_ssl_clients  - number of clients using SSL protocol
+ * n_pools		  - number of pools (role/dbname combinations) maintained by proxy
+ * n_backends	  - total number of backends spawned by this proxy (including tainted)
+ * n_dedicated_backends - number of tainted backends
+ * tx_bytes		  - amount of data sent from backends to clients
+ * rx_bytes		  - amount of data sent from clients to backends
+ * n_transactions - number of transactions processed by all backends of this proxy
+ */
+Datum pg_pooler_state(PG_FUNCTION_ARGS)
+{
+	FuncCallContext* srf_ctx;
+	MemoryContext old_context;
+	PoolerStateContext* ps_ctx;
+	HeapTuple tuple;
+	Datum values[11];
+	bool  nulls[11];
+	int id;
+	int i;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		srf_ctx = SRF_FIRSTCALL_INIT();
+		old_context = MemoryContextSwitchTo(srf_ctx->multi_call_memory_ctx);
+		ps_ctx = (PoolerStateContext*)palloc(sizeof(PoolerStateContext));
+		get_call_result_type(fcinfo, NULL, &ps_ctx->ret_desc);
+		ps_ctx->proxy_id = 0;
+		srf_ctx->user_fctx = ps_ctx;
+		MemoryContextSwitchTo(old_context);
+	}
+	srf_ctx = SRF_PERCALL_SETUP();
+	ps_ctx = srf_ctx->user_fctx;
+	id = ps_ctx->proxy_id;
+	if (id == ConnectionProxiesNumber)
+		SRF_RETURN_DONE(srf_ctx);
+
+	values[0] = Int32GetDatum(ProxyState[id].pid);
+	values[1] = Int32GetDatum(ProxyState[id].n_clients);
+	values[2] = Int32GetDatum(ProxyState[id].n_ssl_clients);
+	values[3] = Int32GetDatum(ProxyState[id].n_pools);
+	values[4] = Int32GetDatum(ProxyState[id].n_backends);
+	values[5] = Int32GetDatum(ProxyState[id].n_dedicated_backends);
+	values[6] = Int32GetDatum(ProxyState[id].n_idle_backends);
+	values[7] = Int32GetDatum(ProxyState[id].n_idle_clients);
+	values[8] = Int64GetDatum(ProxyState[id].tx_bytes);
+	values[9] = Int64GetDatum(ProxyState[id].rx_bytes);
+	values[10] = Int64GetDatum(ProxyState[id].n_transactions);
+
+	for (i = 0; i < 11; i++)
+		nulls[i] = false;
+
+	ps_ctx->proxy_id += 1;
+	tuple = heap_form_tuple(ps_ctx->ret_desc, values, nulls);
+	SRF_RETURN_NEXT(srf_ctx, HeapTupleGetDatum(tuple));
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 3e4ec53a97..a05150cbc5 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -29,6 +29,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/proxy.h"
 #include "replication/logicallauncher.h"
 #include "replication/origin.h"
 #include "replication/slot.h"
@@ -153,6 +154,7 @@ CreateSharedMemoryAndSemaphores(void)
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
+		size = add_size(size, ConnectionProxyShmemSize());
 
 		/* freeze the addin request size and include it */
 		addin_request_allowed = false;
@@ -261,6 +263,7 @@ CreateSharedMemoryAndSemaphores(void)
 	WalRcvShmemInit();
 	PgArchShmemInit();
 	ApplyLauncherShmemInit();
+	ConnectionProxyShmemInit();
 
 	/*
 	 * Set up other modules that need some shared memory space
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 43a5fded10..5b0c92bc19 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -81,15 +81,30 @@
 #error "no wait set implementation available"
 #endif
 
-#ifdef WAIT_USE_EPOLL
+#if defined(WAIT_USE_EPOLL)
 #include <sys/signalfd.h>
+bool WaitEventUseEpoll = true;
+#else
+bool WaitEventUseEpoll = false;
 #endif
 
+/*
+ * The connection pooler needs to delete events from an event set.
+ * Since we have to preserve the positions of all other events,
+ * we cannot move events, so we maintain a list of free events instead.
+ * But poll/WaitForMultipleObjects operate on arrays of listened events,
+ * so elements of the pollfds and handles arrays must be stored without holes
+ * and we need to maintain a mapping between them and WaitEventSet events.
+ * This mapping is stored in the "permutation" array. We also need the backward mapping
+ * (from event to descriptors array), which is implemented using the "index" field of WaitEvent.
+ */
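+
+/*
+ * Illustrative example: with three registered events at positions 0,1,2
+ * (added in that order), deleting the event at position 1 copies the last
+ * descriptor slot over slot 1, sets permutation[1] = 2 and
+ * events[2].index = 1, and pushes position 1 onto the free list
+ * (free_events = 1), so a later AddWaitEventToSet reuses position 1.
+ */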
+
 /* typedef in latch.h */
 struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -97,6 +112,8 @@ struct WaitEventSet
 	 */
 	WaitEvent  *events;
 
+	int        *permutation;    /* indexes of used events (see comment above) */
+
 	/*
 	 * If WL_LATCH_SET is specified in any wait event, latch is a pointer to
 	 * said latch, and latch_pos the offset in the ->events array. This is
@@ -174,9 +191,9 @@ static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action
 #elif defined(WAIT_USE_KQUEUE)
 static void WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -695,6 +712,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	 */
 	sz += MAXALIGN(sizeof(WaitEventSet));
 	sz += MAXALIGN(sizeof(WaitEvent) * nevents);
+	sz += MAXALIGN(sizeof(int) * nevents);
 
 #if defined(WAIT_USE_EPOLL)
 	sz += MAXALIGN(sizeof(struct epoll_event) * nevents);
@@ -715,23 +733,23 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	set->events = (WaitEvent *) data;
 	data += MAXALIGN(sizeof(WaitEvent) * nevents);
 
+	set->permutation = (int *) data;
+	data += MAXALIGN(sizeof(int) * nevents);
+
 #if defined(WAIT_USE_EPOLL)
 	set->epoll_ret_events = (struct epoll_event *) data;
-	data += MAXALIGN(sizeof(struct epoll_event) * nevents);
 #elif defined(WAIT_USE_KQUEUE)
 	set->kqueue_ret_events = (struct kevent *) data;
-	data += MAXALIGN(sizeof(struct kevent) * nevents);
 #elif defined(WAIT_USE_POLL)
 	set->pollfds = (struct pollfd *) data;
-	data += MAXALIGN(sizeof(struct pollfd) * nevents);
 #elif defined(WAIT_USE_WIN32)
-	set->handles = (HANDLE) data;
-	data += MAXALIGN(sizeof(HANDLE) * nevents);
+	set->handles = (HANDLE*) data;
 #endif
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
 	set->exit_on_postmaster_death = false;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 	if (!AcquireExternalFD())
@@ -804,12 +822,11 @@ FreeWaitEventSet(WaitEventSet *set)
 	close(set->kqueue_fd);
 	ReleaseExternalFD();
 #elif defined(WAIT_USE_WIN32)
-	WaitEvent  *cur_event;
+	int i;
 
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
+		WaitEvent* cur_event = &set->events[set->permutation[i]];
 		if (cur_event->events & WL_LATCH_SET)
 		{
 			/* uses the latch's HANDLE */
@@ -822,7 +839,7 @@ FreeWaitEventSet(WaitEventSet *set)
 		{
 			/* Clean up the event object we created for the socket */
 			WSAEventSelect(cur_event->fd, NULL, 0);
-			WSACloseEvent(set->handles[cur_event->pos + 1]);
+			WSACloseEvent(set->handles[cur_event->index + 1]);
 		}
 	}
 #endif
@@ -863,9 +880,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (events == WL_EXIT_ON_PM_DEATH)
 	{
@@ -892,8 +911,20 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->permutation[set->nevents] = event->pos;
+	event->index = set->nevents++;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -929,14 +960,40 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, 0);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
+/*
+ * Remove the event at the specified position from the wait event set
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+	if (--set->nevents != 0)
+	{
+		set->permutation[event->index] = set->permutation[set->nevents];
+		set->events[set->permutation[set->nevents]].index = event->index;
+	}
+	event->fd = PGINVALID_SOCKET;
+	event->events = 0;
+	event->index = -1;
+	event->pos = set->free_events;
+	set->free_events = event_pos;
+}
+
+
 /*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.  The latch may be changed to NULL to disable the latch
@@ -952,13 +1009,19 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 	int			old_events;
 #endif
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 #if defined(WAIT_USE_KQUEUE)
 	old_events = event->events;
 #endif
 
+#if defined(WAIT_USE_EPOLL)
+	/* Under epoll, edge triggering is already provided by EPOLLET, so calls made only to emulate it are no-ops here */
+	if (events & WL_SOCKET_EDGE)
+		return;
+#endif
+
 	/*
 	 * If neither the event mask nor the associated latch changes, return
 	 * early. That's an important optimization for some sockets, where
@@ -1009,9 +1072,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #elif defined(WAIT_USE_KQUEUE)
 	WaitEventAdjustKqueue(set, event, old_events);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -1049,6 +1112,8 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 			epoll_ev.events |= EPOLLIN;
 		if (event->events & WL_SOCKET_WRITEABLE)
 			epoll_ev.events |= EPOLLOUT;
+		if (event->events & WL_SOCKET_EDGE)
+			epoll_ev.events |= EPOLLET;
 	}
 
 	/*
@@ -1057,11 +1122,10 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
-		/* translator: %s is a syscall name, such as "poll()" */
+				 /* translator: %s is a syscall name, such as "poll()" */
 				 errmsg("%s failed: %m",
 						"epoll_ctl()")));
 }
@@ -1069,11 +1133,16 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	struct pollfd *pollfd = &set->pollfds[event->index];
+
+	if (remove)
+	{
+		*pollfd = set->pollfds[set->nevents - 1]; /* nevents is not decremented yet */
+		return;
+	}
 
-	pollfd->revents = 0;
 	pollfd->fd = event->fd;
 
 	/* prepare pollfd entry once */
@@ -1252,9 +1321,21 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	HANDLE	   *handle = &set->handles[event->index + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		*handle = set->handles[set->nevents]; /* nevents is not decremented yet but we need to add 1 to the index */
+		set->handles[set->nevents] = WSA_INVALID_EVENT;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -1716,11 +1797,12 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 {
 	int			returned_events = 0;
 	int			rc;
-	WaitEvent  *cur_event;
-	struct pollfd *cur_pollfd;
+	int			i;
+	struct pollfd *cur_pollfd = set->pollfds;
+	WaitEvent* cur_event;
 
 	/* Sleep */
-	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+	rc = poll(cur_pollfd, set->nevents, (int) cur_timeout);
 
 	/* Check return code */
 	if (rc < 0)
@@ -1743,15 +1825,13 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 		return -1;
 	}
 
-	for (cur_event = set->events, cur_pollfd = set->pollfds;
-		 cur_event < (set->events + set->nevents) &&
-		 returned_events < nevents;
-		 cur_event++, cur_pollfd++)
+	for (i = 0; i < set->nevents && returned_events < nevents; i++, cur_pollfd++)
 	{
 		/* no activity on this FD, skip */
 		if (cur_pollfd->revents == 0)
 			continue;
 
+		cur_event = &set->events[set->permutation[i]];
 		occurred_events->pos = cur_event->pos;
 		occurred_events->user_data = cur_event->user_data;
 		occurred_events->events = 0;
@@ -1842,17 +1922,25 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 					  WaitEvent *occurred_events, int nevents)
 {
 	int			returned_events = 0;
+	int			i;
 	DWORD		rc;
-	WaitEvent  *cur_event;
+	WaitEvent*	cur_event;
 
 	/* Reset any wait events that need it */
-	for (cur_event = set->events;
-		 cur_event < (set->events + set->nevents);
-		 cur_event++)
+	for (i = 0; i < set->nevents; i++)
 	{
-		if (cur_event->reset)
+		cur_event = &set->events[set->permutation[i]];
+
+		/*
+		 * There is a problem on Windows where SSPI connections "hang" in WaitForMultipleObjects, which
+		 * doesn't signal presence of input data (while it is possible to read this data from the socket).
+		 * It looks like the "reset" logic is not completely correct (resetting the event just after
+		 * receiving the previous read event). Resetting all read events fixes this problem.
+		 */
+		if (cur_event->events & WL_SOCKET_READABLE)
+		/* if (cur_event->reset) */
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
@@ -1918,7 +2006,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	 * With an offset of one, due to the always present pgwin32_signal_event,
 	 * the handle offset directly corresponds to a wait event.
 	 */
-	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+	cur_event = (WaitEvent *) &set->events[set->permutation[rc - WAIT_OBJECT_0 - 1]];
 
 	occurred_events->pos = cur_event->pos;
 	occurred_events->user_data = cur_event->user_data;
@@ -1963,7 +2051,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	else if (cur_event->events & WL_SOCKET_MASK)
 	{
 		WSANETWORKEVENTS resEvents;
-		HANDLE		handle = set->handles[cur_event->pos + 1];
+		HANDLE		handle = set->handles[cur_event->index + 1];
 
 		Assert(cur_event->fd);
 
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 108b4d9023..718c0ae9fd 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -812,7 +812,10 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/* Identify owner for lock */
 	if (sessionLock)
+	{
 		owner = NULL;
+		MyProc->is_tainted = true;
+	}
 	else
 		owner = CurrentResourceOwner;
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 897045ee27..eadb87260c 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -394,6 +394,7 @@ InitProcess(void)
 	MyProc->roleId = InvalidOid;
 	MyProc->tempNamespaceId = InvalidOid;
 	MyProc->isBackgroundWorker = IsBackgroundWorker;
+	MyProc->is_tainted = false;
 	MyProc->delayChkpt = false;
 	MyProc->statusFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 2b1b68109f..e0d4bb7800 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4383,6 +4383,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (RestartPoolerOnReload && strcmp(application_name, "pool_worker") == 0)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index 97f0265c12..841e746bb9 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -18,6 +18,7 @@
 #include "funcapi.h"
 #include "miscadmin.h"
 #include "storage/predicate_internals.h"
+#include "storage/proc.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
 
@@ -675,12 +676,14 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
  *	field4: 1 if using an int8 key, 2 if using 2 int4 keys
  */
 #define SET_LOCKTAG_INT64(tag, key64) \
+	MyProc->is_tainted = true; \
 	SET_LOCKTAG_ADVISORY(tag, \
 						 MyDatabaseId, \
 						 (uint32) ((key64) >> 32), \
 						 (uint32) (key64), \
 						 1)
 #define SET_LOCKTAG_INT32(tag, key1, key2) \
+	MyProc->is_tainted = true; \
 	SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, key1, key2, 2)
 
 /*
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 73e0a672ae..a35e4a33a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -132,9 +132,15 @@ int			max_parallel_maintenance_workers = 2;
  */
 int			NBuffers = 1000;
 int			MaxConnections = 90;
+int			SessionPoolSize = 0;
+int			IdlePoolWorkerTimeout = 0;
+int			ConnectionProxiesNumber = 0;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
+
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
+int			MaxSessions = 1000;
 
 int			VacuumCostPageHit = 1;	/* GUC parameters for vacuum */
 int			VacuumCostPageMiss = 2;
@@ -148,3 +154,6 @@ int64		VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
+bool        RestartPoolerOnReload = false;
+bool        ProxyingGUCs = false;
+bool        MultitenantProxy = false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 3b36a31a47..106d18ef7e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -488,6 +488,13 @@ const struct config_enum_entry ssl_protocol_versions_info[] = {
 StaticAssertDecl(lengthof(ssl_protocol_versions_info) == (PG_TLS1_3_VERSION + 2),
 				 "array length mismatch");
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 static struct config_enum_entry recovery_init_sync_method_options[] = {
 	{"fsync", RECOVERY_INIT_SYNC_METHOD_FSYNC, false},
 #ifdef HAVE_SYNCFS
@@ -695,6 +702,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Builtin connection pool"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1390,6 +1399,36 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"proxying_gucs", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("Support setting parameters in connection pooler sessions."),
+		 NULL,
+		},
+		&ProxyingGUCs,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"multitenant_proxy", PGC_USERSET, CONN_POOLING,
+		 gettext_noop("One pool worker can serve clients with different roles"),
+		 NULL,
+		},
+		&MultitenantProxy,
+		false,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
@@ -2270,6 +2309,53 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"idle_pool_worker_timeout", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Sets the maximum allowed duration of any idling connection pool worker."),
+			gettext_noop("A value of 0 turns off the timeout."),
+			GUC_UNIT_MS
+		},
+		&IdlePoolWorkerTimeout,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_proxies", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets number of connection proxies."),
+			gettext_noop("Postmaster spawns a separate worker process for each proxy and distributes connections between proxies using one of the scheduling policies (round-robin, random, load-balancing). "
+						 "Each proxy launches its own subset of backends, so the maximal number of non-tainted backends is "
+						 "session_pool_size*connection_proxies*databases*roles.")
+		},
+		&ConnectionProxiesNumber,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one connection proxy. "
+						 "It can be greater than max_connections and actually be arbitrarily large.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* see max_connections */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
@@ -2328,6 +2414,16 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"proxy_port", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the TCP port for the connection pooler."),
+			NULL
+		},
+		&ProxyPortNumber,
+		6543, 1, 65535,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"unix_socket_permissions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the access permissions of the Unix-domain socket."),
@@ -4879,6 +4975,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"recovery_init_sync_method", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,
 			gettext_noop("Sets the method for synchronizing the data directory before crash recovery."),
@@ -8582,6 +8688,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt, bool isTopLevel)
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot set parameters during a parallel operation")));
 
+	if (!stmt->is_local)
+		MyProc->is_tainted = true;
+
 	switch (stmt->kind)
 	{
 		case VAR_SET_VALUE:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 86425965d0..214232a4b5 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -780,6 +780,19 @@
 #include_if_exists = '...'		# include file only if it exists
 #include = '...'			# include file
 
+#------------------------------------------------------------------------------
+# BUILTIN CONNECTION PROXY
+#------------------------------------------------------------------------------
+
+#proxy_port = 6543              # TCP port for the connection pooler
+#connection_proxies = 0         # number of connection proxies. Setting it to non-zero value enables builtin connection proxy.
+#idle_pool_worker_timeout = 0   # maximum allowed duration of any idling connection pool worker.
+#session_pool_size = 10         # number of backends serving client sessions.
+#restart_pooler_on_reload = off # restart session pool workers on pg_reload_conf().
+#proxying_gucs = off            # support setting parameters in connection pooler sessions.
+#multitenant_proxy = off        # one pool worker can serve clients with different roles (otherwise a separate pool is created for each database/role pair)
+#max_sessions = 1000            # maximum number of client sessions which can be handled by one connection proxy.
+#session_schedule = 'round-robin' # session schedule policy for connection pool.
 
 #------------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e259531f60..d93549d947 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -8170,7 +8170,7 @@
   proname => 'gist_poly_distance', prorettype => 'float8',
   proargtypes => 'internal polygon int2 oid internal',
   prosrc => 'gist_poly_distance' },
-{ oid => '3435', descr => 'sort support',
+{ oid => '6105', descr => 'sort support',
   proname => 'gist_point_sortsupport', prorettype => 'void',
   proargtypes => 'internal', prosrc => 'gist_point_sortsupport' },
 
@@ -11411,4 +11411,11 @@
   proname => 'is_normalized', prorettype => 'bool', proargtypes => 'text text',
   prosrc => 'unicode_is_normalized' },
 
+# builtin connection pool
+{ oid => '3435', descr => 'information about connection pooler proxies workload',
+  proname => 'pg_pooler_state', prorows => '1000', proretset => 't',
+  provolatile => 'v', prorettype => 'record', proargtypes => '',
+  proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int8,int8,int8}', proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{pid,n_clients,n_ssl_clients,n_pools,n_backends,n_dedicated_backends,n_idle_backends,n_idle_clients,tx_bytes,rx_bytes,n_transactions}', prosrc => 'pg_pooler_state' },
+
 ]
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 30fb4e613d..a78d31a896 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -51,7 +51,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -60,6 +60,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index b20deeb555..83aae64872 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -56,7 +56,8 @@ extern WaitEventSet *FeBeWaitSet;
 
 extern int	StreamServerPort(int family, const char *hostName,
 							 unsigned short portNumber, const char *unixSocketDir,
-							 pgsocket ListenSocket[], int MaxListen);
+							 pgsocket ListenSocket[], int ListenPort[], int MaxListen);
+
 extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 013850ac28..b858859e4c 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -160,6 +160,22 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int IdlePoolWorkerTimeout;
+extern PGDLLIMPORT int ConnectionProxiesNumber;
+extern PGDLLIMPORT int SessionSchedule;
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT bool ProxyingGUCs;
+extern PGDLLIMPORT bool MultitenantProxy;
+
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
diff --git a/src/include/port.h b/src/include/port.h
index 227ef4b148..22901d0803 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 05c5a53442..18d93ed275 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -464,6 +464,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -474,6 +475,7 @@ int			pgwin32_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *except
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 extern int	pgwin32_noblock;
 
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 0efdd7c232..e4f012751c 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -17,6 +17,7 @@
 extern bool EnableSSL;
 extern int	ReservedBackends;
 extern PGDLLIMPORT int PostPortNumber;
+extern PGDLLIMPORT int ProxyPortNumber;
 extern int	Unix_socket_permissions;
 extern char *Unix_socket_group;
 extern char *Unix_socket_directories;
@@ -47,6 +48,11 @@ extern int	postmaster_alive_fds[2];
 
 extern PGDLLIMPORT const char *progname;
 
+extern PGDLLIMPORT void* (*LibpqConnectdbParams)(char const* keywords[], char const* values[], char** errmsg);
+
+struct Proxy;
+struct Port;
+
 extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn();
 extern void ClosePostmasterPorts(bool am_syslogger);
 extern void InitProcessGlobals(void);
@@ -63,6 +69,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+extern int  ParseStartupPacket(struct Port* port, MemoryContext memctx, void* pkg_body, int pkg_size, bool ssl_done, bool gss_done);
+extern int	BackendStartup(struct Port* port, int* backend_pid);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/postmaster/proxy.h b/src/include/postmaster/proxy.h
new file mode 100644
index 0000000000..254d0f099e
--- /dev/null
+++ b/src/include/postmaster/proxy.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * proxy.h
+ *	  Exports from postmaster/proxy.c.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/postmaster/proxy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PROXY_H
+#define _PROXY_H
+
+/*
+ * Information in shared memory about connection proxy state (used for session scheduling and monitoring)
+ */
+typedef struct ConnectionProxyState
+{
+	int pid;                  /* proxy worker pid */
+	int n_clients;            /* total number of clients */
+	int n_ssl_clients;        /* number of clients using SSL connection */
+	int n_pools;              /* number of dbname/role combinations */
+	int n_backends;           /* total number of launched backends */
+	int n_dedicated_backends; /* number of tainted backends */
+	int n_idle_backends;      /* number of idle backends */
+	int n_idle_clients;       /* number of idle clients */
+	uint64 tx_bytes;          /* amount of data sent to client */
+	uint64 rx_bytes;          /* amount of data sent to server */
+	uint64 n_transactions;    /* total number of processed transactions */
+} ConnectionProxyState;
+
+extern ConnectionProxyState* ProxyState;
+extern PGDLLIMPORT int MyProxyId;
+extern PGDLLIMPORT pgsocket MyProxySocket;
+
+extern int  ConnectionProxyStart(void);
+extern int  ConnectionProxyShmemSize(void);
+extern void ConnectionProxyShmemInit(void);
+#ifdef EXEC_BACKEND
+extern void ConnectionProxyMain(int argc, char *argv[]);
+#endif
+
+#endif
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 9e94fcaec2..91956bbe93 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -134,9 +134,11 @@ typedef struct Latch
 /* avoid having to deal with case on platforms not requiring it */
 #define WL_SOCKET_CONNECTED  WL_SOCKET_WRITEABLE
 #endif
+#define WL_SOCKET_EDGE       (1 << 7)
 
 #define WL_SOCKET_MASK		(WL_SOCKET_READABLE | \
 							 WL_SOCKET_WRITEABLE | \
+							 WL_SOCKET_EDGE | \
 							 WL_SOCKET_CONNECTED)
 
 typedef struct WaitEvent
@@ -144,12 +146,15 @@ typedef struct WaitEvent
 	int			pos;			/* position in the event data structure */
 	uint32		events;			/* triggered events */
 	pgsocket	fd;				/* socket fd associated with event */
+	int         index;          /* position of the corresponding element in the descriptors array (for poll() and win32 implementations) */
 	void	   *user_data;		/* pointer provided in AddWaitEventToSet */
 #ifdef WIN32
 	bool		reset;			/* Is reset of the event required? */
 #endif
 } WaitEvent;
 
+extern bool WaitEventUseEpoll;
+
 /* forward declaration to avoid exposing latch.c implementation details */
 typedef struct WaitEventSet WaitEventSet;
 
@@ -180,4 +185,6 @@ extern int	WaitLatchOrSocket(Latch *latch, int wakeEvents,
 							  pgsocket sock, long timeout, uint32 wait_event_info);
 extern void InitializeLatchWaitSet(void);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 #endif							/* LATCH_H */
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 2fd1ff09a7..7fc26f0476 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -251,6 +251,8 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	bool        is_tainted;            /* backend has modified session GUCs, uses temporary tables, prepared statements, ... */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index b9b5c1adda..631d151032 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 6374ec657a..06622796c3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -59,7 +59,7 @@
 #include <security.h>
 #undef SECURITY_WIN32
 
-#ifndef ENABLE_GSS
+#if !defined(ENABLE_GSS) && !defined(GSS_BUFFER_STUB_DEFINED)
 /*
  * Define a fake structure compatible with GSSAPI on Unix.
  */
@@ -68,6 +68,7 @@ typedef struct
 	void	   *value;
 	int			length;
 } gss_buffer_desc;
+#define GSS_BUFFER_STUB_DEFINED
 #endif
 #endif							/* ENABLE_SSPI */
 
diff --git a/src/makefiles/Makefile.cygwin b/src/makefiles/Makefile.cygwin
index 81089d6257..fed76be9e0 100644
--- a/src/makefiles/Makefile.cygwin
+++ b/src/makefiles/Makefile.cygwin
@@ -18,6 +18,7 @@ override CPPFLAGS += -DWIN32_STACK_RLIMIT=$(WIN32_STACK_RLIMIT)
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/makefiles/Makefile.win32 b/src/makefiles/Makefile.win32
index e72cb2db0e..183c8de2ce 100644
--- a/src/makefiles/Makefile.win32
+++ b/src/makefiles/Makefile.win32
@@ -16,6 +16,7 @@ DLSUFFIX = .dll
 ifneq (,$(findstring backend,$(subdir)))
 ifeq (,$(findstring conversion_procs,$(subdir)))
 ifeq (,$(findstring libpqwalreceiver,$(subdir)))
+ifeq (,$(findstring libpqconn,$(subdir)))
 ifeq (,$(findstring replication/pgoutput,$(subdir)))
 ifeq (,$(findstring snowball,$(subdir)))
 override CPPFLAGS+= -DBUILDING_DLL
diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile
index 95e4bc8228..ca96a92954 100644
--- a/src/test/regress/GNUmakefile
+++ b/src/test/regress/GNUmakefile
@@ -123,6 +123,7 @@ REGRESS_OPTS = --dlpath=. --max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS)
 
 check: all
 	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)
+	$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) --port=6543 --temp-config=$(srcdir)/conn_proxy.conf
 
 check-tests: all | temp-install
 	$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)
diff --git a/src/test/regress/conn_proxy.conf b/src/test/regress/conn_proxy.conf
new file mode 100644
index 0000000000..ebaa257f4b
--- /dev/null
+++ b/src/test/regress/conn_proxy.conf
@@ -0,0 +1,3 @@
+connection_proxies = 1
+port = 5432
+log_statement=all
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index a184404e21..2c8e8e5278 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -168,6 +168,7 @@ sub mkvcbuild
 
 	$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');
 	$postgres->AddIncludeDir('src/backend');
+	$postgres->AddIncludeDir('src/port');
 	$postgres->AddDir('src/backend/port/win32');
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
@@ -279,6 +280,12 @@ sub mkvcbuild
 	$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
 	$libpqwalreceiver->AddReference($postgres, $libpq);
 
+	my $libpqconn =
+	  $solution->AddProject('libpqconn', 'dll', '',
+		'src/backend/postmaster/libpqconn');
+	$libpqconn->AddIncludeDir('src/interfaces/libpq');
+	$libpqconn->AddReference($postgres, $libpq);
+
 	my $pgoutput = $solution->AddProject('pgoutput', 'dll', '',
 		'src/backend/replication/pgoutput');
 	$pgoutput->AddReference($postgres);
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index 0cc91e7d6c..6eeb2e7090 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -19,6 +19,7 @@ if exist pgsql.suo del /q /a:H pgsql.suo
 del /s /q src\bin\win32ver.rc 2> NUL
 del /s /q src\interfaces\win32ver.rc 2> NUL
 if exist src\backend\win32ver.rc del /q src\backend\win32ver.rc
+if exist src\backend\postmaster\libpqconn\win32ver.rc del /q src\backend\postmaster\libpqconn\win32ver.rc
 if exist src\backend\replication\libpqwalreceiver\win32ver.rc del /q src\backend\replication\libpqwalreceiver\win32ver.rc
 if exist src\backend\replication\pgoutput\win32ver.rc del /q src\backend\replication\pgoutput\win32ver.rc
 if exist src\backend\snowball\win32ver.rc del /q src\backend\snowball\win32ver.rc
#71Zhihong Yu
zyu@yugabyte.com
In reply to: Konstantin Knizhnik (#70)
Re: Built-in connection pooler

Hi,

+          With <literal>load-balancing</literal> policy postmaster choose
proxy with lowest load average.
+          Load average of proxy is estimated by number of clients
connection assigned to this proxy with extra weight for SSL connections.

I think 'load-balanced' may be better than 'load-balancing'.
postmaster choose proxy -> postmaster chooses proxy

+ Load average of proxy is estimated by number of clients
connection assigned to this proxy with extra weight for SSL connections.

I wonder if there would be a mixture of connections with and without SSL.

+ Terminate an idle connection pool worker after the specified
number of milliseconds.

Should the time unit be seconds ? It seems a worker would exist for at
least a second.

+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group

It would be better to update the year in the header.

+ * Use then for launching pooler worker backends and report error

Not sure I understand the above sentence. Did you mean 'them' instead of
'then' ?

Cheers

On Sun, Mar 21, 2021 at 11:32 AM Konstantin Knizhnik <knizhnik@garret.ru>
wrote:

People asked me to resubmit built-in connection pooler patch to commitfest.
Rebased version of connection pooler is attached.

#72Konstantin Knizhnik
knizhnik@garret.ru
In reply to: Zhihong Yu (#71)
Re: Built-in connection pooler

Hi,
Thank you for the review!

On 21.03.2021 23:59, Zhihong Yu wrote:

Hi,

+          With <literal>load-balancing</literal> policy postmaster 
choose proxy with lowest load average.
+          Load average of proxy is estimated by number of clients 
connection assigned to this proxy with extra weight for SSL connections.

I think 'load-balanced' may be better than 'load-balancing'.

Sorry, I am not a native speaker.
But it seems to me (based on the articles I have read) that
"load-balancing" is the more widely used term:

https://en.wikipedia.org/wiki/Load_balancing_(computing)

postmaster choose proxy -> postmaster chooses proxy

Fixed.

+          Load average of proxy is estimated by number of clients
connection assigned to this proxy with extra weight for SSL connections.

I wonder if there would be a mixture of connections with and without SSL.

Why not? And what is wrong with it?
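
A mixture is handled naturally: the load of a proxy is just estimated as
something like n_clients plus some extra weight for each SSL client, so SSL
and non-SSL connections can be assigned to the same proxy.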

+         Terminate an idle connection pool worker after the specified
number of milliseconds.

Should the time unit be seconds ? It seems a worker would exist for at
least a second.

Most other similar timeouts (statement timeout, session timeout, ...)
are specified in milliseconds.
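
Also, assuming the parameter is declared with millisecond units like those
timeouts, a value with an explicit unit such as '30s' or '1min' can still be
written in postgresql.conf and is converted automatically, so the base unit
should not be a burden for the user.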

+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group

It would be better to update the year in the header.

Fixed.

+        * Use then for launching pooler worker backends and report error

Not sure I understand the above sentence. Did you mean 'them' instead
of 'then' ?

Sorry, that is indeed a typo;
"them" should be used.
Fixed.

#73Antonin Houska
ah@cybertec.at
In reply to: Konstantin Knizhnik (#70)
Re: Built-in connection pooler

Konstantin Knizhnik <knizhnik@garret.ru> wrote:

People asked me to resubmit built-in connection pooler patch to commitfest.
Rebased version of connection pooler is attached.

I've reviewed the patch but haven't read the entire thread thoroughly. I hope
that I don't duplicate many comments posted earlier by others.

(Please note that the patch does not apply to the current master; I had to
reset the head of my repository to the appropriate commit.)

Documentation / user interface
------------------------------

* session_pool_size (config.sgml)

I wonder if

"The default value is 10, so up to 10 backends will serve each database,"

should rather be

"The default value is 10, so up to 10 backends will serve each database/user combination."

However, when I read the code, I think that each proxy counts the size of the
pool separately, so the total number of backends used for a particular
database/user combination seems to be

session_pool_size * connection_proxies
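
For example, with connection_proxies = 2 and the default session_pool_size =
10, a single database/user combination could end up with up to 2 * 10 = 20
backends rather than 10, if I read the code correctly.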

Since the feature uses shared memory statistics anyway, shouldn't it only
count the total number of backends per database/user? It would need some
locking, but the actual pools (hash tables) could still be local to the proxy
processes.

* connection_proxies

(I've noticed that Ryan Lambert questioned this variable upthread.)

I think this variable makes the configuration less straightforward from the
user's perspective. Couldn't the server launch additional proxies dynamically,
as needed, e.g. based on the shared memory statistics that the patch
introduces? I see that the postmaster would have to send the sockets in a
different way. How about adding a "proxy launcher" process that would take
care of scheduling and launching new proxies?

* multitenant_proxy

I thought the purpose of this setting was to reduce the number of backends
needed, but I could not find evidence of that in the code. In particular,
client_attach() always retrieves the backend from the appropriate pool, and
backend_reschedule() does so as well. Thus the roles of client and backend
should always match. What piece of information am I missing?

* typo (2 occurrences in config.sgml)

"stanalone" -> "standalone"

Design / coding
---------------

* proxy.c:backend_start() does not change the value of the "host" parameter to
the socket directory, so I assume the proxy connects to the backend via the
TCP protocol. I think the unix socket should be preferred for this connection
if the platform has it. However:

* is libpq necessary for the proxy to connect to the backend at all? Maybe
postgres.c:ReadCommand() can be adjusted so that the backend can communicate
with the proxy just via the plain socket.

I don't like the idea of server components communicating via libpq (do we
need anything else of the libpq connection than the socket?) as such, but
especially these includes in proxy.c look weird:

#include "../interfaces/libpq/libpq-fe.h"
#include "../interfaces/libpq/libpq-int.h"

* How does the proxy recognize connections to the walsender? I haven't tested
that, but it's obvious that these connections should not be proxied.

* ConnectionProxyState is in shared memory, so access to its fields should be
synchronized.
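
For the plain counters (n_clients, tx_bytes, ...), pg_atomic_uint32/uint64
fields might be enough; otherwise a per-proxy LWLock could be added, roughly
like this (only illustrative, the lock field does not exist yet):

	LWLockAcquire(&proxy->lock, LW_EXCLUSIVE);
	proxy->n_clients += 1;
	proxy->tx_bytes += nbytes;
	LWLockRelease(&proxy->lock);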

* StartConnectionProxies() is only called from PostmasterMain(), so I'm not
sure the proxies get restarted after a crash. Perhaps PostmasterStateMachine()
needs to call it too after calling StartupDataBase().

* Why do you need the Channel.magic integer field? Wouldn't a boolean field
"active" be sufficient?

** In proxy_loop(), I've noticed (chan->magic == ACTIVE_CHANNEL_MAGIC) tests
inside the branch

else if (chan->magic == ACTIVE_CHANNEL_MAGIC)

Since neither channel_write() nor channel_read() seems to change the
value, I think those tests are not necessary.

* Comment lines are often too long.

* pgindent should be applied to the patch at some point.

I can spend more time reviewing the patch during the next CF.

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#74Li Japin
japinli@hotmail.com
In reply to: Zhihong Yu (#71)
Re: Built-in connection pooler

Hi, hackers

Is the PostgreSQL core not interested in a built-in connection pooler? If so, could
somebody tell me why we do not need it? If not, what can we do to get it into core?

Thanks in advance!

[Sorry if you already received this email; I mistyped the pgsql list address in my previous email.]

--
Best regards
Japin Li